Dataset schema (one record per article):
id: string (length 2–8)
url: string (length 31–117)
title: string (length 1–71)
text: string (length 153–118k)
topic: string (4 classes)
section: string (length 4–49)
sublist: string (9 classes)
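The records that follow list these fields in that order (id, url, title, text, topic, section, sublist). As a minimal sketch of how such a table could be consumed, assuming a Hugging Face-style dataset with exactly these columns (the dataset identifier below is a hypothetical placeholder, not a confirmed source):

```python
# Minimal sketch: load records with the schema above and summarise them by topic.
# "example-org/wikipedia-topics" is a hypothetical placeholder dataset ID.
from collections import Counter
from datasets import load_dataset  # Hugging Face "datasets" library

ds = load_dataset("example-org/wikipedia-topics", split="train")

# "topic" is described as having 4 classes; count articles per class.
topic_counts = Counter(row["topic"] for row in ds)

# "text" ranges up to ~118k characters; pick out the longest articles.
long_titles = [row["title"] for row in ds if len(row["text"]) > 50_000]

print(topic_counts)
print(f"{len(long_titles)} articles longer than 50k characters")
```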
313822
https://en.wikipedia.org/wiki/Haddock
Haddock
The haddock (Melanogrammus aeglefinus) is a saltwater ray-finned fish from the family Gadidae, the true cods. It is the only species in the monotypic genus Melanogrammus. It is found in the North Atlantic Ocean and associated seas, where it is an important species for fisheries, especially in northern Europe, where it is marketed fresh, frozen and smoked; smoked varieties include the Finnan haddie and the Arbroath smokie. Other smoked versions include "long boneless", the filleted side of larger haddock smoked in oak chips with the skin left on the fillet. Description The haddock has the elongated, tapering body shape typical of members of the cod family. It has a relatively small mouth which does not extend to below the eye; with the lower profile of the face being straight and the upper profile slightly rounded, this gives its snout a characteristic wedge-shaped profile. The upper jaw projects beyond the lower more so than in the Atlantic cod. There is a rather small barbel on the chin. There are three dorsal fins, the first triangular in shape, with 14 to 17 fin rays in the first, 20 to 24 in the second, and 19 to 22 in the third. There are also two anal fins, with 21 to 25 fin rays in the first and 20 to 24 in the second. The anal and dorsal fins are all separated from each other. The pelvic fins are small with an elongated first fin ray. The upper side of the haddock's body varies in colour from dark grey brown to nearly black while the lower part of the body is dull silvery white. It has a distinctive black lateral line, contrasting with the whitish background colour, which curves slightly over the pectoral fins. It also has a distinctive oval black blotch or "thumbprint", sometimes called the "Devil's thumbprint", which sits between the lateral line and the pectoral fin, a feature which leads to the name of the genus Melanogrammus, which derives from the Greek "melanos" meaning "black" and "gramma" meaning letter or signal. The dorsal, pectoral, and caudal fins are dark grey in colour while the anal fins are pale, matching the colour of the silvery sides, with black speckles at their bases. The pelvic fins are white with a variable amount of black spots. Occasionally, differently coloured variants are recorded, which may be barred, golden on the back, or lack the dark shoulder blotch. The longest haddock recorded was in length and weighed . However, haddock are rarely over in length and the vast majority of haddock caught in the United Kingdom measure between . In eastern Canadian waters, haddock range in size from in length and in weight. Distribution The haddock has populations on either side of the north Atlantic, but it is more abundant in the eastern Atlantic than on the North American side. In the north-east Atlantic it occurs from the Bay of Biscay north to Spitzbergen; however, it is most abundant north of the English Channel. It also occurs around Novaya Zemlya and the Barents Sea in the Arctic. The largest stocks are in the North Sea, off the Faroe Islands, off Iceland and along the coast of Norway, but these are discrete populations with little interchange between them. Off North America, the haddock is found from western Greenland south to Cape Hatteras, but the main commercially fished stock occurs from Cape Cod to the Grand Banks. Habitat and biology The haddock is a demersal species which occurs at depths from , although it is most frequently recorded at .
It is found over substrates made up of rock, sand, gravel or shells and it prefers temperatures of between . Off Iceland and in the Barents Sea, haddock undergo extensive migrations, but in the north western Atlantic their movements are more restricted, consisting of movements to and from their spawning areas. They reach sexual maturity at 4 years old in males and 5 years old in females, except for the population in the North Sea, which matures at 2 years in males and 3 years in females. The overall sex ratio is roughly 1:1, but in shallower areas females predominate, while the males show a preference for waters further offshore. The fecundity of the females varies with size: a fish of length bears 55,000 eggs while a fish at has 1,841,000 eggs. Spawning takes place at depths of around . In the northwestern Atlantic spawning lasts from January to July, although it does not occur simultaneously in all areas, and in the northeastern Atlantic the spawning season runs from February to June, peaking in March and April. The eggs are pelagic with a diameter of , and they take one to three weeks to hatch. Following metamorphosis, the post-larval fish remain pelagic until they attain a length of around , when they settle to a demersal habit. Their growth rate shows considerable regional variation and fish at one year old can measure , at 2 years old , up to at 13 years old. Their lifespan is around 14 years. The most important spawning grounds are in the waters off the central coast of Norway, off the southwest of Iceland, and over the Georges Bank. The fish which spawn in inshore waters are normally smaller and younger than those which occur in offshore areas. The younger fish have a spawning season which is less than half as long as that of the larger and older stock offshore. Once hatched, the larvae do not appear to travel far from their spawning grounds; however, some larvae spawned off the west coast of Scotland are transported into the North Sea through the Fair Isle-Shetland Gap or to the northeast of Shetland. In their larval stages, haddock mainly feed on the immature stages of copepods, ostracods and Limacina, with their diet changing as they grow, moving on to larger pelagic prey such as amphipods, euphausiids, eggs of invertebrates, zoea larvae of decapods and increasing numbers of copepods. Once they have reached the settled, demersal, post-larval stage, they gradually switch from pelagic to benthic prey. Adults primarily feed on benthic invertebrates such as sea urchins, brittlestars, bivalves and worms; however, they will feed opportunistically on smaller fish such as capelin, sandeels and Norway pout. Juvenile haddock are an important prey for larger demersal fish, including other gadoids, while seals prey on the larger fish. The recorded growth rates of haddock underwent significant change over the 30 to 40 years up to 2011. Growth became more rapid over that period, with haddock attaining adult size much earlier than was noted 30–40 years previously. However, the degree to which these larger, younger fish contribute to the reproductive success of the population is unknown. Growth rates have nevertheless slowed in the most recent years, and there is some evidence that these slower growth rates may be the result of an exceptionally large year class in 2003. The haddock stock periodically has higher than normal productivity, for example in 1962 and 1967, and to a lesser extent in 1974 and 1999.
These result in a more southerly distribution of the fish and have a strong effect on the biomass of the spawning stock, but because of high fishing mortality, these revivals do not have any lasting effect on the population. In general, there was above average recruitment from the 1960s up to the early 1980s, similar to the recruitment of Atlantic cod and whiting; this has been called the gadoid outburst. There was strong recruitment in 1999, but since then the recruitment rate has been very low. Parasites Cod and related species are plagued by parasites. For example, the cod worm, Lernaeocera branchialis, starts life as a copepod, a small, free-swimming crustacean larva. The first host used by the cod worm is a flatfish or lumpsucker, which it captures with grasping hooks at the front of its body. It penetrates the lumpsucker with a thin filament which it uses to suck its blood. The nourished cod worms then mate on the lumpsucker. The female worm, with her now fertilized eggs, then finds a cod, or a cod-like fish such as a haddock or whiting. There, the worm clings to the gills while it metamorphoses into a plump, sinusoidal, wormlike body, with a coiled mass of egg strings at the rear. The front part of the worm's body penetrates the body of the cod until it enters the rear bulb of the host's heart. There, firmly rooted in the cod's circulatory system, the front part of the parasite develops like the branches of a tree, reaching into the main artery. In this way, the worm extracts nutrients from the cod's blood, remaining safely tucked beneath the cod's gill cover until it releases a new generation of offspring into the water. Taxonomy and etymology The haddock was first formally described as Gadus aeglefinus in 1758 by Carolus Linnaeus in the 10th edition of volume one of his Systema naturae, with a type locality given as "European seas". In 1862 Theodore Nicholas Gill created the genus Melanogrammus with M. aeglefinus as its only species. The 5th edition of Fishes of the World classifies the haddock within the subfamily Gadinae, the typical cods, of the family Gadidae, which is within the superfamily Gadoidea of the order Gadiformes. The generic name Melanogrammus means "black line", a reference to the black lateral line of this species. The specific name is a latinisation of the vernacular names egrefin and eglefin, used in France and England. Fisheries Haddock is fished year-round using gear such as Danish seine nets, trawls, long lines and gill nets, and is often caught in mixed-species fisheries alongside other groundfish such as cod and whiting. The main fishing grounds in the eastern Atlantic are in the Barents Sea, around Iceland, around the Faeroe Islands, in the North Sea, the Celtic Sea, and in the English Channel. Landings in the eastern Atlantic fluctuated around 200–350 thousand tonnes per year in the period 1980–2017. During the 1980s, the largest portion of the catch was taken at Rockall, but since about 2000 the majority of the catch has been taken in the Barents Sea. All the stocks in the eastern Atlantic are assessed by ICES, which publishes recommendations for the Total Allowable Catch on an annual basis. In the western Atlantic the eastern Georges Bank haddock stock is jointly assessed on an annual basis by Canada and the United States, and the stock is collaboratively managed through the Canada–United States Transboundary Management Guidance Committee, which was established in 2000.
The commercial catch of haddock in North America was approximately 40–60 thousand tonnes per year between 1920 and 1960. This declined sharply in the late 1960s to between 5 and 30 thousand tonnes per year. Despite a few good years post-1970, landings have not returned to historical levels. Haddock is currently on the Greenpeace seafood red list due to concerns regarding the impact of bottom trawls on the marine environment. In contrast, the Monterey Bay Aquarium considers haddock a "good alternative". Many haddock fisheries have been certified as sustainable by the Marine Stewardship Council. All seven stocks assessed in the eastern Atlantic are currently considered by ICES to be harvested sustainably. The haddock populations in the western Atlantic (the offshore grounds of Georges Bank off New England and Nova Scotia) are also considered to be harvested sustainably. As food Haddock is very popular as a food fish. It is sold fresh or preserved by smoking, freezing, drying, or to a small extent canning. Haddock, along with Atlantic cod and plaice, is one of the most popular fish used in British fish and chips. When fresh, the flesh of haddock is clean and white and it is often cooked in similar ways to cod. A fresh haddock fillet will be firm and translucent and hold together well, but less fresh fillets become nearly opaque. Young, fresh haddock and cod fillets are often sold as scrod in Boston, Massachusetts; this refers to the size of the fish, which are sold in a range of size categories, i.e., scrod, markets, and cows. Haddock is the predominant fish of choice in Scotland in a fish supper. It is also the main ingredient of Norwegian fishballs (fiskeboller). Unlike cod, haddock is not an appropriate fish for salting, and preservation is more commonly effected by drying and smoking. The smoking of haddock was highly refined in Grimsby. Traditional Grimsby smoked fish (mainly haddock, but sometimes cod) is produced in the traditional smokehouses in Grimsby, which are mostly family-run businesses that have developed their skills over many generations. Grimsby fish market sources its haddock from the North East Atlantic, principally Iceland, Norway and the Faroe Islands. These fishing grounds are sustainably managed and have not seen the large-scale depletion of fish stocks seen in EU waters. One popular form of haddock is Finnan haddie, which is named after the fishing village of Finnan or Findon in Scotland, where the fish was originally cold-smoked over smouldering peat. Finnan haddie is often poached in milk and served for breakfast. The town of Arbroath on the east coast of Scotland produces the Arbroath smokie, a hot-smoked haddock which requires no further cooking before eating. Smoked haddock is naturally an off-white colour and it is frequently dyed yellow, as are other smoked fish. Smoked haddock is the essential ingredient in the Anglo-Indian dish kedgeree, and also in the Scottish dish Cullen skink, a chowder-like soup.
Biology and health sciences
Acanthomorpha
null
313830
https://en.wikipedia.org/wiki/Bulldozer
Bulldozer
A bulldozer or dozer (also called a crawler) is a large, motorized machine equipped with a metal blade at the front for pushing material: soil, sand, snow, rubble, or rock during construction work. It travels most commonly on continuous tracks, though specialized models riding on large off-road tires are also produced. Its most popular accessory is a ripper, a large hook-like device mounted singly or in multiples at the rear to loosen dense materials. Bulldozers are used heavily in large- and small-scale construction, road building, mining and quarrying, on farms, in heavy industry factories, and in military applications in both peace and wartime. The word "bulldozer" refers only to a motorized unit fitted with a blade designed for pushing. The word is sometimes used inaccurately for other heavy equipment such as a front-end loader, which is designed for carrying rather than pushing material. The term originally referred only to the blade attachment but is now commonly applied to any crawler tractor with a front-mounted blade. Description Typically, bulldozers are large and powerful tracked heavy equipment. The tracks give them excellent traction and mobility through very rough terrain. Wide tracks also help distribute the vehicle's weight over a large area (decreasing ground pressure), thus preventing it from sinking in sandy or muddy ground. Extra-wide tracks are known as swamp tracks or low ground pressure (LGP) tracks. Bulldozers have transmission systems designed to take advantage of the track system and provide excellent tractive force. These traits allow bulldozers to excel in road building, construction, mining, forestry, land clearing, infrastructure development, and any other projects requiring highly mobile, powerful, and stable earth-moving equipment. A variant is the all-wheel-drive wheeled bulldozer, which generally has four large rubber-tired wheels, hydraulically operated articulated steering, and a hydraulically actuated blade mounted forward of the articulation joint. The bulldozer's primary tools are the blade and the ripper. Blade Bulldozer blades come in three types: the straight ("S") blade, short with no lateral curve or side wings, which can be used for fine grading; the universal ("U") blade, tall and very curved, with large side wings to maximize the load; and the combination ("S-U", or semi-U) blade, shorter, with less curvature and smaller side wings, typically used for pushing large rocks, as at a quarry. Blades can be fitted straight across the frame, or at an angle. All can be lifted; some, with additional hydraulic cylinders, can be tilted so that the blade angles up to one side. Sometimes, a bulldozer is used to push or pull another piece of earth-moving equipment known as a "scraper" to increase productivity. The towed Fresno Scraper, invented in 1883 by James Porteous, was the first design to enable this to be done economically, removing the soil from an area being cut and depositing it where needed as fill. Dozer blades with a reinforced center section for pushing are known as "bull blades". Dozer blades are added to combat engineering vehicles and other military equipment, such as artillery tractors like the Type 73 or M8 tractor, to clear battlefield obstacles and prepare firing positions. Dozer blades may be mounted on main battle tanks to clear antitank obstacles or mines, and to dig improvised shelters. Ripper A ripper is a long, claw-like shank that may be mounted singly or in multiples on the rear of a bulldozer to loosen hard and impacted materials.
Usually a single shank is preferred for heavy ripping. The ripper is equipped with a replaceable tungsten steel alloy tip, known as a boot. Ripping can not only loosen soil (such as podzol hardpan) in agricultural and construction applications but also break shaly rock or pavement into easily handled small rubble. A variant of the ripper is the stumpbuster, a single spike protruding horizontally that is used to split a tree stump. Variants Armored bulldozers Bulldozers employed for combat-engineering roles are often fitted with armor to protect the driver from firearms and debris, enabling bulldozers to operate in combat zones. The most widely documented use is the Israeli military's militarized Caterpillar D9, used for earth moving, clearing terrain obstacles, opening routes, and detonating explosive charges. The IDF used armoured bulldozers extensively during Operation Rainbow, where they were used to uproot Gaza Strip smuggling tunnels and destroy residential neighbourhoods, water wells and pipes, and agricultural land to expand the military buffer zone along the Philadelphi Route. This drew criticism of both the use and the suppliers of armoured bulldozers from human-rights organizations such as the EWASH coalition and Human Rights Watch, the latter of which urged Caterpillar to cease its sale of bulldozers to the IDF. The use of bulldozers was seen as necessary by Israeli authorities to uproot smuggling tunnels, destroy houses used by Palestinian gunmen, and expand the buffer zone. Some forces' engineer doctrines differentiate between a low-mobility armoured dozer (LMAD) and a high-mobility armoured dozer (HMAD). The LMAD is dependent on a flatbed to move it to its employment site, whereas the HMAD has a more robust engine and drive system designed to give it road mobility with a moderate range and speed. HMADs, however, normally lack the full cross-country mobility characteristics of a dozer blade-equipped tank or armoured personnel carrier. Some bulldozers have been fitted with armor by civilian operators to prevent bystanders or police from interfering with the work performed by the bulldozer, as in the case of strikes or the demolition of condemned buildings. This has also been done by civilians in disputes with the authorities, such as Marvin Heemeyer, who outfitted his Komatsu D355A bulldozer with homemade composite armor and then used it to demolish government buildings. Remote-controlled dozers In recent years, innovations in construction technology have made remote-controlled bulldozers a reality. Heavy machinery can now be controlled from up to 1,000 feet away. This contributes to the safety of workers on the jobsite, keeping them at a secure distance from potentially dangerous jobs. The ability to control the heavy machinery from afar gives workers sufficient control over the dozers to get the job done. Though these machines are still in their early stages, many construction companies are using them successfully. History The first bulldozers were adapted from Holt farm tractors that were used to plough fields. The versatility of tractors in soft ground for logging and road building contributed to the development of the armored tank in World War I. In 1923, farmer James Cummings and draftsman J. Earl McLeod made the first designs for the bulldozer. A replica is on display at the city park in Morrowville, Kansas, where the two built the first bulldozer. On December 18, 1923, Cummings and McLeod filed U.S.
patent #1,522,378, which was later issued on January 6, 1925, for an "Attachment for Tractors". By the 1920s, tracked vehicles had become common, particularly the Caterpillar 60. Rubber-tired vehicles came into use in the 1940s. To dig canals, raise earthen dams, and do other earth-moving jobs, these tractors were equipped with a large, thick metal plate in front. (The blade got its curved shape later.) In some early models, the driver sat on top in the open without a cabin. The three main types of bulldozer blades are a U-blade for pushing and carrying soil relatively long distances, a straight blade for "knocking down" and spreading piles of soil, and a brush rake for removing brush and roots. These attachments (home-built or built by small equipment manufacturers of attachments for wheeled and crawler tractors and trucks) appeared by 1929. Widespread acceptance of the bull-grader does not seem to have come before the mid-1930s. The addition of power down-force provided by hydraulic cylinders, instead of just the weight of the blade, made them the preferred excavation machine for large and small contractors alike by the 1940s, by which time the term "bulldozer" referred to the entire machine and not just the attachment. Over the years, bulldozers got bigger and more powerful in response to the demand for equipment suited to ever larger earthworks. Firms such as Caterpillar, Komatsu, Clark Equipment Co, Case, Euclid, Allis Chalmers, Liebherr, LiuGong, Terex, Fiat-Allis, John Deere, Massey Ferguson, BEML, XGMA, and International Harvester manufactured large, tracked-type earthmoving machines. R.G. LeTourneau and Caterpillar manufactured large, rubber-tired bulldozers. Bulldozers grew more sophisticated as time passed. Improvements include drivetrains analogous to an automatic transmission in an automobile instead of a manual transmission, such as the early Euclid C-6 and TC-12 or Model C Tournadozer; blade movement controlled by hydraulic cylinders or electric motors instead of early models' cable winch/brake; and automatic grade control. Hydraulic cylinders enabled the application of down force, more precise manipulation of the blade, and automated controls. In the very snowy winter of 1946–47 in the United Kingdom, in at least one case a remote, cut-off village running out of food was supplied by a bulldozer towing a big sled carrying the necessary supplies. A more recent innovation is the outfitting of bulldozers with GPS technology, such as that manufactured by Topcon Positioning Systems, Inc., Trimble Inc, or Leica Geosystems, for precise grade control and (potentially) "stakeless" construction. In response to the many, and often varying, claims about these systems, the Kellogg Report published in 2010 a detailed comparison of all the manufacturers' systems, evaluating more than 200 features for dozers alone. The best-known maker of bulldozers is Caterpillar. Komatsu, Liebherr, Case, Hitachi, Volvo, and John Deere are present-day competitors. Although these machines began as modified farm tractors, they became the mainstay for big civil construction projects, and found their way into use by military construction units worldwide. The best-known model, the Caterpillar D9, was also used to clear mines and demolish enemy structures. Manufacturers Industry statistics based on 2010 production published by Off-Highway Research showed Shantui was the largest producer of bulldozers, making over 10,000 units that year, or two in five of the crawler-type dozers made in the world.
The next-largest producer by number of units is Caterpillar Inc., which produced 6,400 units. Komatsu introduced the D575A in 1981, the D575A-2 in 1991, and the D575A-3 in 2002, which the company touts as the biggest bulldozer in the world. History of the word A 19th-century term used in engineering for a horizontal forging press. Around the 1870s: In the USA, a "bulldose" was a large dose (namely, one large enough to be literally or figuratively effective against a bull) of any sort of medicine or punishment. By the late 1870s, "to bulldoze" and "bulldozing" were being used throughout the United States to describe intimidation "by violent and unlawful means", which sometimes meant a severe whipping or coercion, or other intimidation, such as at gunpoint. It had a particular meaning in the Southern United States as a whipping or other punishment for African Americans to suppress black voter turnout in the 1876 United States presidential election. 1886: "Bulldozer" meant a large-caliber pistol and the person who wielded it. Late 19th century: "Bulldozing" meant using brute force to push over or through any obstacle, with reference to two bulls pushing against each other's heads in a fight over dominance. 1930s: applied to the vehicle. Blades appeared as early as 1929, but were known as "bull grader" blades, and the term "bulldozer blade" did not appear to come into widespread use until the mid-1930s. "Bulldozer" now refers to the whole machine, not just the attachment. In contemporary usage, "bulldozer" is sometimes shortened to "dozer", and the verb "bulldozing" to "dozing", thus making a homophone with the pre-existing verb "dozing".
Technology
Specific-purpose transportation
null
313833
https://en.wikipedia.org/wiki/Swiss%20Army%20knife
Swiss Army knife
The Swiss Army knife (SAK) is a pocketknife, generally multi-tooled, now manufactured by Victorinox. The term "Swiss Army knife" was coined by American soldiers after World War II because they had trouble pronouncing the German word "Offiziersmesser", meaning "officer's knife". The Swiss Army knife generally has a drop-point main blade plus other types of blades and tools, such as a screwdriver, a can opener, a saw blade, a pair of scissors, and many others. These are folded into the handle of the knife through a pivot point mechanism. The handle is traditionally a red colour, with either a Victorinox or Wenger "cross" logo or, for Swiss military issue knives, the coat of arms of Switzerland. Other colours, textures, and shapes have appeared over the years. Originating in Ibach, Switzerland, the Swiss Army knife was first produced in 1891 when the Karl Elsener company, which later became Victorinox, won the contract to produce the Swiss Army's Modell 1890 knife, taking over from the previous German manufacturer. In 1893, the Swiss cutlery company Paul Boéchat & Cie, which later became Wenger SA, received its first contract from the Swiss military to produce model 1890 knives; the two companies split the initial contract for provision of the knives and operated as separate enterprises from 1908. In 2005 Victorinox acquired Wenger. As an icon of the culture of Switzerland, both the design and the versatility of the knife have worldwide recognition. The term "Swiss Army knife" has acquired usage as a figure of speech indicating a multifaceted skillset. History Origins The Swiss Army knife was not the first multi-use pocket knife. In 1851, in Moby-Dick (chapter 107), Herman Melville mentions the "Sheffield contrivances, assuming the exterior – though a little swelled – of a common pocket knife; but containing, not only blades of various sizes, but also screwdrivers, cork-screws, tweezers, bradawls, pens, rulers, nail files and countersinkers." During the late 1880s, the Swiss Army decided to purchase a new folding pocket knife for its soldiers. This knife was to be suitable for use by the army in opening canned food and for maintenance of the Swiss service rifle, the Schmidt–Rubin, which required a screwdriver for assembly and disassembly. In January 1891, the knife received the official designation Modell 1890. The knife had a blade, reamer, can opener, screwdriver, and grips made out of dark oak wood that some say was later partly replaced with ebony wood. At that time no Swiss company had the necessary production capacity, so the initial order for 15,000 knives was placed with the German knife manufacturer Wester & Co. of Solingen, Germany. These knives were delivered in October 1891. In 1891, Karl Elsener, then owner of a company that made surgical equipment, set out to manufacture the knives in Switzerland itself. At the end of 1891 Elsener began production of the Modell 1890 knives, in direct competition with the Solingen company. He incurred financial losses doing so, as Wester & Co was able to produce the knives at a lower cost. Elsener was on the verge of bankruptcy when, in 1896, he developed an improved knife, intended for use by officers, with tools attached on both sides of the handle using a special spring mechanism, allowing him to use the same spring to hold them in place. This new knife was patented on 12 June 1897, with a second, smaller cutting blade, a corkscrew, and wood fibre grips, under the name of Schweizer Offiziers- und Sportmesser ("Swiss officer's and sports knife").
While the Swiss military did not commission the knife, it was successfully marketed internationally, restoring Elsener's company to prosperity. Elsener used a variation on the Swiss coat of arms to identify his knives beginning in 1909. With slight modifications, this is still the company logo. Also in 1909, on the death of his mother, Elsener began using her name, Victoria, as a brand name in her honour. In 1921, following the invention of stainless steel (inoxydable in French), Karl Elsener's son renamed the company Victorinox, combining Victoria and inoxydable. In 1893 the second industrial cutler of Switzerland, Paul Boéchat & Cie, headquartered in Delémont in the French-speaking region of Jura, started selling a similar product. Its general manager, Théodore Wenger, acquired the company and renamed it the Wenger Company. Victorinox and Wenger In 1908 the Swiss government split the contract between Victorinox and Wenger, placing half the orders with each. By mutual agreement, Wenger advertised "the Genuine Swiss Army Knife" and Victorinox used the slogan "the Original Swiss Army Knife". On 26 April 2005, Victorinox acquired Wenger, once again becoming the sole supplier of knives to the military of Switzerland. Victorinox at first kept the Wenger brand intact, but on 30 January 2013 the company announced that the Wenger brand of knives would be abandoned in favour of Victorinox. The press release stated that Wenger's factory in Delémont would continue to produce knives and that all employees at the site would retain their jobs. It further elaborated that an assortment of items from the Wenger line-up would remain in production under the Victorinox brand name. Wenger's US headquarters would be merged with Victorinox's location in Monroe, Connecticut, and Wenger's watch and licensing business would continue as a separate brand, SwissGear. Up until 2008 Victorinox AG and Wenger SA supplied about 50,000 knives to the military of Switzerland each year, and manufactured many more for export, mostly to the United States. Commercial knives can be distinguished by their cross logos; the Victorinox cross logo is surrounded by a shield while the Wenger cross logo is surrounded by a slightly rounded square. Victorinox registered the words "Swiss Army" and "Swiss Military" as trademarks in the US and was sued in October 2018 at the Bern cantonal commercial court by the Swiss Confederacy (represented by Armasuisse, the authority representing the actual Swiss military). After an initial hearing Victorinox agreed to cede the registration in the United States of the term "Swiss military" to Armasuisse in return for an exclusive licence to market perfumes under the same name. Features, tools, and parts Tools and components There are various models of the Swiss Army knife with different tool combinations. Though Victorinox does not provide custom knives, it has produced many different variations to suit individual users, with the Wenger company having produced even more model variations.
Common main layer tools: large blade (with 'VICTORINOX SWISS MADE' tang stamp on Victorinox blades since 2005); small blade; nail file; scissors (sharpened to a 65° angle); wood saw; metal file or metal saw with nail file; magnifying glass; Phillips screwdriver; fish scaler / hook disgorger / ruler in cm and inches; pliers / wire cutter / wire crimper; can opener / 3 mm slot screwdriver; bottle opener / 6 mm slot screwdriver with wire stripper. Other main layer tools: LED light; USB flash drive; hoof cleaner; shackle opener / marlinspike; electrician's blade / wire scraper; pruning blade; pharmaceutical spatula (cuticle pusher); Cyber Tool (bit driver); combination tool containing cap opener / can opener / 5 mm slot screwdriver with wire stripper. Back layer tools: corkscrew or Phillips driver; reamer; multipurpose hook with nail file; 2 mm slotted screwdriver; chisel; mini screwdriver (screws within the corkscrew); keyring. Scale tools: tweezers; toothpick; pressurised ballpoint pen (with a retractable version on smaller models, which can be used to set DIP switches); stainless steel pin; digital clock / alarm / timer / altimeter / thermometer / barometer. Three Victorinox SAK models had a butane lighter: the SwissFlame, the CampFlame and the SwissChamp XXLT, first introduced in 2002 and discontinued in 2005. The models were never sold in the United States due to a lack of safety features. They used a standard piezoelectric ignition system for easy ignition, with an adjustable flame, and were designed for operation at altitudes up to above sea level and continuous operation of 10 minutes. In January 2010, Victorinox announced the Presentation Master models, released in April 2010. The technological tools included a laser pointer and a detachable flash drive with fingerprint reader. Victorinox now sells an updated version called the Slim Jetsetter, with "a premium software package that provides ultra secure data encryption, automatic backup functionality, secure web surfing capabilities, file and email synchronization between the drive and multiple computers, Bluetooth pairing and much more. On the hardware side of things, biometric fingerprint technology, laser pointers, LED lights, Bluetooth remote control and of course, the original Swiss Army Knife implements – blade, scissors, nail file, screwdriver, key ring and ballpoint pen are standard. Not every feature is available on every model within the collection." In 2006, Wenger produced a knife called "The Giant" that included every implement the company ever made, with 87 tools and 141 different functions. It was recognized by Guinness World Records as the world's most multifunctional penknife. It retails for about €798 or US$1,000, though some vendors charge much higher prices. In the same year, Victorinox released the SwissChamp XAVT, consisting of 118 parts and 80 functions with a retail price of $425. The Guinness Book of Records recognizes a unique 314-blade Swiss Army-style knife made in 1991 by Master Cutler Hans Meister as the world's largest penknife, weighing . Locking mechanisms Some Swiss Army knives have locking blades to prevent accidental closure. Wenger was the first to offer a "PackLock" for the main blade on several of their standard 85 mm models. Several large Wenger and Victorinox models have a locking blade secured by a slide lock that is operated with an unlocking button integrated in the scales.
Some Victorinox 111 mm series knives have a double liner lock that secures the cutting blade and the large slotted screwdriver/cap opener/wire stripper combination tool, which is designed for prying. Design and materials Rivets and flanged bushings made from brass hold together all machined steel parts and other tools, separators and the scales. The rivets are made by cutting and pointing appropriately sized bars of solid brass. The separators between the tools have been made from aluminium alloy since 1951. This makes the knives lighter. Previously these separating layers were made of nickel-silver. The martensitic stainless steel alloy used for the cutting blades is optimized for high toughness and corrosion resistance; it has a composition of 15% chromium, 0.60% silicon, 0.52% carbon, 0.50% molybdenum, and 0.45% manganese, and is designated X55CrMo14 or DIN 1.4110 according to Victorinox. After a hardening process at 1040 °C and annealing at 160 °C the blades achieve an average hardness of 56 HRC. This hardness is suitable for practical use and easy resharpening, but is less than that achieved in stainless steel alloys used for blades optimized for high wear resistance. According to Victorinox the martensitic stainless steel alloy used for the other parts is X39Cr13 (aka DIN 1.4031, AISI/ASTM 420) and for the springs X20Cr13 (aka DIN 1.4021, but still within AISI/ASTM 420). The steel used for the wood saws, scissors and nail files has a hardness of HRC 53, the screwdrivers, tin openers and awls have a hardness of HRC 52, and the corkscrew and springs have a hardness of HRC 49. The metal saws and files, in addition to the special case hardening, are also subjected to a hard chromium plating process so that iron and steel can also be filed and cut. Although Swiss Army knives with red cellulose acetate butyrate (CAB) scales (generally known under the trade names Cellidor, Tenite and Tenex) are the most common, many colors and alternative scale materials, such as more resilient nylon and aluminum, are available. Many textures, colors and shapes now appear in the Swiss Army knife. Since 2006 the scales on some knife models have incorporated textured rubber non-slip inlays, intended to provide sufficient grip with moist or wet hands. The rubber also provides some impact protection for such edged scales. Modifications range from professionally produced custom models combining novel materials, colors, finishes and occasionally new tools (such as firesteels or tool 'blades' mounting replaceable surgical scalpel blades) to replacement of the standard scales (handles) with new versions in natural materials such as buffalo horn. In addition to 'limited edition' production runs, numerous examples from basic to professional-level customizations of standard knives (such as retrofitting pocket clips, one-off scales created using 3D printing techniques, decoration using anodization and new scale materials) can be found by searching for "SAK mods". Assembly During assembly, all components are placed on several brass rivets. The first components are generally an aluminium separator and a flat steel spring. Once a layer of tools is installed, another separator and spring are placed for the next layer of tools. This process is repeated until all the desired tool layers and the finishing separator are installed. Once the knife is built, the metal parts are fastened by adding brass flanged bushings to the rivets. The excess length of the rivets is then cut off to make them flush with the bushings.
Finally, the remaining length of the rivets is flattened into the flanged bushings. After the assembly of the metal parts, the blades on smaller knives are sharpened to a 15° angle, resulting in a 30° V-shaped steel cutting edge. From sized knives the blades are sharpened to a 20° angle, resulting in a 40° V-shaped steel cutting edge. Chisel ground blades are sharpened to a 24° angle, resulting in a 24° asymmetric steel cutting edge where only one side is ground and the other is deburred and remains flat. The blades are then checked with a laser reflecting goniometer to verify the angle of the cutting edges. Finally, the scales are applied. Slightly undersized holes incorporated into their inner surface enclose the bushings, which have a truncated-cone cross-section and are slightly undercut, forming a one-way interference fit when pressed into the generally softer and more elastic scale material. The result is a tight adhesive-free connection that nonetheless permits new identical-pattern scales to be quickly and easily applied. Sizes Victorinox models are available in , , , , , , and lengths when closed. The thickness of the knives varies depending on the number of tool layers included. The models offer the most variety in tool configurations in the Victorinox model line, with as many as 15 layers. Wenger models are available in , , , , and lengths when closed. Thickness varies depending on the number of tool layers included. The models offer the most variety in tool configurations in the Wenger model line, with as many as 10 layers. Knives issued by the Swiss Armed Forces Since the first issue as personal equipment in 1891, the Soldatenmesser (Soldier Knives) issued by the Swiss Armed Forces have been revised several times. There are five different main Modelle (models). Their model numbers refer to the year of introduction in the military supply chain. Several main models have been revised over time and therefore exist in different Ausführungen (executions), also denoted by the year of introduction. The issued models of the Swiss Armed Forces are: Modell 1890; Modell 1890 Ausführung 1901; Modell 1908; Modell 1951; Modell 1951 Ausführung 1954; Modell 1951 Ausführung 1957; Modell 1961; Modell 1961 Ausführung 1965; Modell 1961 Ausführung 1978; Modell 1961 Ausführung 1994; Soldatenmesser 08 (Soldier Knife 08). Soldier Knives are issued to every recruit or member of the Swiss Armed Forces, and the knives issued to officers have never differed from those issued to non-commissioned officers and privates. A model incorporating a corkscrew and scissors was produced as an officer's tool, but was deemed not "essential for survival". Officers were free to purchase it individually on their own account. Soldier knife model 1890 The Soldier Knife model 1890 had a spear point blade, reamer, can-opener, screwdriver and grips made out of oak wood scales (handles) that were treated with rapeseed oil for greater toughness and water-repellency, which made them black in color. The wooden grips of the Modell 1890 tended to crack and chip, so in 1901 these were changed to a hard reddish-brown fiber similar in appearance to wood. The knife was long, thick and weighed . Soldier knife model 1908 The Soldier Knife model 1908 had a clip point blade rather than the model 1890's spear point blade, still with the fiber scales, carbon steel tools, nickel-silver bolster, liners, and divider. The knife was long, thick and weighed . The contract with the Swiss Army split production equally between the Victorinox and Wenger companies.
Soldier knife model 1951 The Soldier Knife model 1951 had fiber scales, nickel-silver bolsters, liners, and divider, and a spear point blade. This was the first Swiss Armed Forces issue model where the tools were made of stainless steel. The screwdriver now had a scraper arc on one edge. The knife was long, thick and weighed . Soldier knife model 1961 The Soldier Knife model 1961 has a long knurled alox handle with the Swiss crest, a drop point blade, a reamer, a blade combining bottle opener, screwdriver, and wire stripper, and a combined can-opener and small screwdriver. The knife was thick and weighed . The 1961 model also contains a brass spacer, which allows the knife, with the screwdriver and the reamer extended simultaneously, to be used to assemble the SIG 550 and SIG 510 assault rifles: the knife serves as a restraint to the firing pin during assembly of the lock. The Soldier Knife model 1961 was manufactured only by Victorinox and Wenger and was the first issued knife bearing the Swiss coat of arms on the handle. Soldier knife 08 In 2007 the Swiss Government made a request for new, updated soldier knives for the Swiss military, for distribution in late 2008. The evaluation phase of the new soldier knife began in February 2008, when Armasuisse issued an invitation to tender. A total of seven suppliers from Switzerland and other countries were invited to participate in the evaluation process. Functional models submitted by suppliers underwent practical testing by military personnel in July 2008, while laboratory tests were used to assess compliance with technical requirements. A cost-benefit analysis was conducted and the model with the best price/performance ratio was awarded the contract. The order for 75,000 soldier knives plus cases was worth . This equates to a purchase price of per knife plus case, as of October 2009. Victorinox won the contest with a knife based on the One-Hand German Army Knife issued by the German Bundeswehr, which had been released in the civilian model lineup, with the addition of a toothpick and tweezers stored in the nylon grip scales (side cover plates), as the One-Hand Trekker/Trailmaster model. Mass production of the new Soldatenmesser 08 (Soldier Knife 08) for the Swiss Armed Forces started in December 2008, and the knife was first issued to the Swiss Armed Forces beginning with the first basic training sessions of 2009. The Soldier Knife 08 has a long ergonomic dual-density handle with TPU rubbery thermoplastic elastomer non-slip inlays incorporated in the green Polyamide 6 grip shells and a double liner locking system, a one-hand long locking, partly wavy serrated, chisel ground (optimized for right-handed use) drop point blade sharpened to a 24° angle, a wood saw, a can opener with small slotted screwdriver, a locking bottle opener with large slotted screwdriver and wire stripper/bender, a reamer sharpened to a 48° angle, a Phillips (PH2) screwdriver and a diameter split keyring. The Soldier Knife 08 width is , thickness is , overall length opened is and it weighs . The Soldier Knife 08 was not manufactured by Wenger. Knives issued by other militaries The armed forces of more than 20 different nations have issued or approved the use of various versions of Swiss Army knives made by Victorinox, among them the forces of Germany, France, the Netherlands, Norway, Malaysia and the United States (NSN 1095-01-653-1166 Knife, Combat). Space program The Swiss Army knife has been present in space missions carried out by NASA since the late 1970s.
In 1978, NASA sent a letter of confirmation to Victorinox regarding a purchase of 50 knives known as the Master Craftsman model. In 1985, Edward M. Payton, brother of astronaut Gary E. Payton, sent a letter to Victorinox, asking about getting a Master Craftsman knife after seeing the one his brother used in space. There are other stories of repairs conducted in space using a Swiss Army knife. Cultural impact The Swiss Army knife has been added to the collections of the Museum of Modern Art in New York and Munich's State Museum of Applied Art for its design. The term "Swiss Army" is currently a registered trademark owned by Victorinox AG and its subsidiary, Wenger SA. In both the original television series MacGyver and its 2016 reboot, the character Angus MacGyver frequently uses different Swiss Army knives in various episodes to solve problems and construct simple objects. The term "Swiss Army knife" has entered popular culture as a metaphor for usefulness and adaptability. The multi-purpose nature of the tool has also inspired a number of other gadgets. A particularly large Wenger knife model, the Wenger 16999, has inspired a large number of humorous reviews on Amazon. This model was recognized by Guinness World Records as 'The World's Most Multifunctional Penknife'. When Judge Roger Benitez of the U.S. District Court for the Southern District of California overturned California's 30-year-old ban on assault weapons in Miller v. Bonta, he compared the Swiss Army knife to the AR-15 rifle in the first sentence of his opinion: "Like the Swiss Army Knife, the popular AR-15 rifle is a perfect combination of home defense weapon and homeland defense equipment." In response, California Governor Gavin Newsom stated that the comparison "completely undermines the credibility of this decision".
Technology
Knives
null
313925
https://en.wikipedia.org/wiki/Ichthyostega
Ichthyostega
Ichthyostega (from Greek ichthys, 'fish', and stegē, 'roof') is an extinct genus of limbed tetrapodomorphs from the Late Devonian of what is now Greenland. It was among the earliest four-limbed vertebrates in the fossil record and was one of the first with weight-bearing adaptations for terrestrial locomotion. Ichthyostega possessed lungs and limbs that helped it navigate through shallow water in swamps. Although Ichthyostega is often labelled a 'tetrapod' because of its limbs and fingers, it evolved long before true crown group tetrapods and could more accurately be referred to as a stegocephalian or stem tetrapod. Likewise, while undoubtedly of amphibian build and habit, it is not a true member of the group in the narrow sense, as the first modern amphibians (members of the group Lissamphibia) appeared in the Triassic Period. Until finds of other early stegocephalians and closely related fishes in the late 20th century, Ichthyostega stood alone as a transitional fossil between fish and tetrapods, combining fish and tetrapod features. Newer research has shown that it had an unusual anatomy, functioning more akin to a seal than a salamander, as previously assumed. History In 1932 Gunnar Säve-Söderbergh described four Ichthyostega species from the Late Devonian of East Greenland and one species belonging to the genus Ichthyostegopsis, I. wimani. These species could be synonymous (in which case only I. stensioei would remain), because their morphological differences are not very pronounced. The species differ in skull proportions, skull punctuation and skull bone patterns. The comparisons were done on 14 specimens collected in 1931 by the Danish East Greenland Expedition. Additional specimens were collected between 1933 and 1955. Description Ichthyostega was a fairly large animal for its time, as it was broadly built and about 1.5 m (4.9 ft) long. The skull was low, with dorsally placed eyes and large labyrinthodont teeth. The posterior margin of the skull formed an operculum covering the gills. The spiracle was situated in an otic notch behind each eye. Computed tomography has revealed that Ichthyostega had a specialized ear, including a stapes with a unique morphology compared to other tetrapods or to any fish hyomandibula. Postcranial skeleton The legs were large compared to those of contemporary relatives. It had seven digits on each hind leg, along with a peculiar, poorly ossified mass which lies anteriorly adjacent to the digits. The exact number of digits on the forelimb is not yet known, since fossils of the hand have not been found. While in water, the foot would have functioned more like a fleshy paddle than a fin. The vertebral column and ribcage of Ichthyostega were unusual and highly specialized relative to both its contemporaries and later tetrapods. The thoracic vertebrae at the front of the trunk and the short neck have tall neural spines that lean backwards. They attach to pointed ribs which increase in size and acquire prominent overlapping flanges. Past the sixth or seventh flanged rib, the ribs abruptly decrease in size and lose their flanges. The lumbar vertebrae at the back of the trunk have strong muscle scars and neural spines which are bent forwards and decrease in size towards the hips. The sacral vertebrae above the hips have fan-shaped neural spines that transition from forward-leaning to backward-leaning as they approach the tail. The vertebrae right behind the hips have unusually large ribs similar to those of the thoracic region.
The caudal vertebrae have slender spines that lean backwards. The tail of Ichthyostega retained a low fin supported by bony lepidotrichia (fin rays). The tail fin was not as deep as in Acanthostega, and would have been less useful for swimming. Ichthyostega is related to Acanthostega gunnari, which is also from what is now East Greenland. Ichthyostega's skull seems more fish-like than that of Acanthostega, but it had a pelvic girdle morphology that seems stronger and better adapted to life on land. Ichthyostega also had more supportive ribs and stronger vertebrae with more developed zygapophyses. Whether or not these traits were independently evolved in Ichthyostega is debated. It does, however, show that Ichthyostega may have ventured onto land on occasion, unlike contemporaneous limbed vertebrates such as Elginerpeton and Obruchevichthys. Classification Traditionally, Ichthyostega was considered part of an order named for it, the "Ichthyostegalia". However, this group represents a paraphyletic grade of primitive stem-tetrapods and is not used by many modern researchers. Phylogenetic analysis has shown Ichthyostega to be intermediate in grade among other primitive stegocephalian stem-tetrapods; one such analysis of early stegocephalians was performed by Swartz in 2012. Paleobiology Early limbed vertebrates like Ichthyostega and Acanthostega differed from earlier tetrapodomorphs such as Eusthenopteron or Panderichthys in their increased adaptations for life on land. Though tetrapodomorphs possessed lungs, they used gills as their primary means of discharging carbon dioxide. Tetrapodomorphs used their bodies and tails for locomotion and their fins for steering and braking; Ichthyostega may have used its forelimbs for locomotion on land and its tail for swimming. Its massive ribcage was made up of overlapping ribs and the animal possessed a stronger skeletal structure, a largely fishlike spine, and forelimbs apparently powerful enough to pull the body from the water. These anatomical modifications may have been a result of selection to overcome the lack of buoyancy experienced on land. The hindlimbs were smaller than the forelimbs and unlikely to have borne full weight in an adult, while the broad, overlapping ribs would have inhibited side-to-side movements. The forelimbs had the required range of movement to push the body up and forward, probably allowing the animal to drag itself across flat land by synchronous (rather than alternate) "crutching" movements, much like those of a mudskipper or a seal. It was incapable of typical quadrupedal gaits as the forelimbs lacked the necessary rotary motion range.
Biology and health sciences
Prehistoric amphibians
Animals
314101
https://en.wikipedia.org/wiki/Ichthyosauria
Ichthyosauria
Ichthyosauria is an order of large extinct marine reptiles sometimes referred to as "ichthyosaurs", although the term is also used for wider clades in which the order resides. Ichthyosaurians thrived during much of the Mesozoic era; based on fossil evidence, they first appeared around 250 million years ago (Ma) and at least one species survived until about 90 million years ago, into the Late Cretaceous. During the Early Triassic epoch, ichthyosaurs and other ichthyosauromorphs evolved from a group of unidentified land reptiles that returned to the sea, in a development similar to how the mammalian land-dwelling ancestors of modern-day dolphins and whales returned to the sea millions of years later, which they gradually came to resemble in a case of convergent evolution. Ichthyosaurians were particularly abundant in the Late Triassic and Early Jurassic periods, until they were replaced as the top aquatic predators by another marine reptilian group, the Plesiosauria, in the later Jurassic and Early Cretaceous, though previous views of ichthyosaur decline during this period are probably overstated. Ichthyosaurian diversity declined due to environmental volatility caused by climatic upheavals in the early Late Cretaceous, and the group became extinct around the Cenomanian-Turonian boundary approximately 90 million years ago. Scientists became aware of the existence of ichthyosaurians during the early 19th century, when the first complete skeletons were found in England. In 1834, the order Ichthyosauria was named. Later that century, many finely preserved ichthyosaurian fossils were discovered in Germany, including soft-tissue remains. Since the late 20th century, there has been a revived interest in the group, leading to an increased number of named ichthyosaurs from all continents, with over fifty genera known. Ichthyosaurian species varied from in length. Ichthyosaurians resembled both modern fish and dolphins. Their limbs had been fully transformed into flippers, which sometimes contained a very large number of digits and phalanges. At least some species possessed a dorsal fin. Their heads were pointed, and the jaws often were equipped with conical teeth to catch smaller prey. Some species had larger, bladed teeth to attack large animals. The eyes were very large, likely an adaptation for deep diving. The neck was short, and later species had a rather stiff trunk. These also had a more vertical tail fin, used for a powerful propulsive stroke. The vertebral column, made of simplified disc-like vertebrae, continued into the lower lobe of the tail fin. Ichthyosaurians were air-breathing, warm-blooded, and bore live young. Many, if not all, species had a layer of blubber for insulation. Like other ancient marine reptiles, such as those in the clades Mosasauria and Plesiosauria, the genera in Ichthyosauria are not part of the clade Dinosauria. History of discoveries Early finds The first known illustrations of ichthyosaur bones, vertebrae, and limb elements were published by the Welshman Edward Lhuyd in his work of 1699. Lhuyd thought that they represented fish remains. In 1708, the Swiss naturalist Johann Jakob Scheuchzer described two ichthyosaur vertebrae, assuming they belonged to a man drowned in the Universal Deluge. In 1766, an ichthyosaur jaw with teeth was found at Weston near Bath. In 1783, this piece was exhibited by the Society for Promoting Natural History as that of a crocodilian. In 1779, ichthyosaur bones were illustrated in John Walcott's Descriptions and Figures of Petrifications.
Towards the end of the eighteenth century, British fossil collections quickly increased in size. Those of the naturalists Ashton Lever and John Hunter were acquired in their totality by museums; later, it was established that they contained dozens of ichthyosaur bones and teeth. The bones had typically been labelled as belonging to fish, dolphins, or crocodiles; the teeth had been seen as those of sea lions. The demand by collectors led to more intense commercial digging activities. In the early nineteenth century, this resulted in the discovery of more complete skeletons. In 1804, Edward Donovan at St Donats uncovered an ichthyosaur specimen containing a jaw, vertebrae, ribs, and a shoulder girdle. It was considered to be a giant lizard. In October 1805, a newspaper article reported the find of two additional skeletons, one discovered at Weston by Jacob Wilkinson, the other, at the same village, by Reverend Peter Hawker. In 1807, the latter specimen was described by Peter Hawker's cousin, Joseph Hawker. This specimen thus gained some fame among geologists as 'Hawker's Crocodile'. In 1810, near Stratford-upon-Avon, an ichthyosaur jaw was found that was combined with plesiosaur bones to obtain a more complete specimen, indicating that the distinctive nature of ichthyosaurs was not yet understood, awaiting the discovery of far better fossils.
The first complete skeletons
In 1811, in Lyme Regis, along the Jurassic Coast of Dorset, the first complete ichthyosaur skull was found by Joseph Anning, the brother of Mary Anning, who, in 1812, while still a young girl, secured the torso of the same specimen. Their mother, Molly Anning, sold the combined piece to squire Henry Henley for £23. Henley lent the fossil to the London Museum of Natural History of William Bullock. When this museum was closed, the British Museum bought the fossil for a price of £47/5s; it still belongs to the collection of the independent Natural History Museum and has the inventory number NHMUK PV R1158 (formerly BMNH R.1158). It has been identified as a specimen of Temnodontosaurus platyodon. In 1814, the Annings' specimen was described by Professor Everard Home, in the first scientific publication dedicated to an ichthyosaur. Intrigued by the strange animal, Home tried to locate additional specimens in existing collections. In 1816, he described ichthyosaur fossils owned by William Buckland and James Johnson. In 1818, Home published data obtained by corresponding with naturalists all over Britain. In 1819, he wrote two articles about specimens found by Henry Thomas De la Beche and Thomas James Birch. A last publication of 1820 was dedicated to a discovery by Birch at Lyme Regis. The series of articles by Home covered the entire anatomy of ichthyosaurs, but highlighted details only; a systematic description was still lacking. Home was very uncertain how the animal should be classified. Though most individual skeletal elements looked very reptilian, the anatomy as a whole resembled that of a fish, so he initially assigned the creature to the fishes, as seemed to be confirmed by the flat shape of the vertebrae. At the same time, he considered it a transitional form between fishes and crocodiles, not in an evolutionary sense, but as regarded its place in the scala naturae, the "Chain of Being" hierarchically connecting all living creatures. In 1818, Home noted some coincidental similarities between the coracoid of ichthyosaurians and the sternum of the platypus.
This induced him to emphasize its status as a transitional form, combining, like the platypus, traits of several larger groups. In 1819, he considered it a form between newts, like the olm, and lizards; he then gave it a formal generic name: Proteo-Saurus. However, in 1817, Karl Dietrich Eberhard Koenig had already referred to the animal as Ichthyosaurus, "fish saurian", from the Greek ichthys, "fish". This name was at the time an invalid nomen nudum and was only published by Koenig in 1825, but it was adopted by De la Beche in 1819 in a lecture in which he named three Ichthyosaurus species. This text would only be published in 1822, just after De la Beche's friend William Conybeare published a description of these species, together with a fourth one. The type species was Ichthyosaurus communis, based on a skeleton that is now lost. Conybeare considered that Ichthyosaurus had priority relative to Proteosaurus. Although this is incorrect by modern standards, the latter name became a "forgotten name", a nomen oblitum. In 1821, De la Beche and Conybeare provided the first systematic description of ichthyosaurs, comparing them to another newly identified marine reptile group, the Plesiosauria. Much of this description reflected the insights of their friend, the anatomist Joseph Pentland. In 1835, the order Ichthyosauria was named by Henri Marie Ducrotay de Blainville. In 1840, Richard Owen named an order Ichthyopterygia as an alternative concept.
Popularisation during the 19th century
The discovery of a hitherto unsuspected extinct group of large marine reptiles generated much publicity, capturing the imagination of both scientists and the public at large. People were fascinated by the strange build of the animals, especially the large scleral rings in the eye sockets, which were sometimes erroneously assumed to have been visible on the living animal. Their bizarre form induced a feeling of alienation, allowing people to realise the immense span of time that had passed since the era in which the ichthyosaur swam the oceans. Not all were convinced that ichthyosaurs had gone extinct: Reverend George Young found a skeleton in 1819 at Whitby; in his 1821 description, he expressed the hope that living specimens could still be found. Geologist Charles Lyell, to the contrary, assumed that the Earth was eternal, so that in the course of time the ichthyosaur might well reappear, a possibility lampooned in a famous caricature by De la Beche. Public awareness was increased by the works of the eccentric collector Thomas Hawkins, a pre-Adamite who believed that ichthyosaurs were monstrous creations of the devil: Memoirs of Ichthyosauri and Plesiosauri of 1834 and The Book of the Great Sea-Dragons of 1840. The first work was illustrated with mezzotints by John Samuelson Templeton. These publications also contained scientific descriptions and represented the first textbooks on the subject. In the summer of 1834, Hawkins, after a valuation by William Buckland and Gideon Mantell, sold his extensive collection, then the largest of its kind in the world, to the British Museum. However, curator Koenig quickly discovered that the fossils had been heavily restored with plaster, applied by an Italian artist from Lucca; of the most attractive piece, an Ichthyosaurus specimen, almost the entire tail was fake. It turned out that Professor Buckland had been aware of this beforehand. The museum was forced to reach a settlement with Hawkins, and gave the fake parts a lighter colour to differentiate them from the authentic skeletal elements.
Ichthyosaurs became even more popular in 1854 through the rebuilding at Sydenham Hill of the Crystal Palace, originally erected at the world exhibition of 1851. In the surrounding park, life-sized, painted, concrete statues of extinct animals were placed, designed by Benjamin Waterhouse Hawkins under the direction of Richard Owen. Among them were three models of an ichthyosaur. Although it was known that ichthyosaurs had been animals of the open seas, they were shown basking on the shore, a convention followed by many nineteenth-century illustrations with the aim, as Conybeare once explained, of better exposing their build. This led to the misunderstanding that they really had an amphibious lifestyle. The pools in the park were at the time subject to tidal changes, so that fluctuations in the water level at intervals submerged the ichthyosaur statues, adding a certain realism. Remarkably, internal skeletal structures, such as the scleral rings and the many phalanges of the flippers, were shown on the outside.
Later 19th-century finds
During the nineteenth century, the number of described ichthyosaur genera gradually increased. New finds allowed for a better understanding of their anatomy. Owen had noticed that many fossils showed a downward bend in the rear tail. At first, he explained this as a post mortem effect, a tendon pulling the tail end downwards after death. However, after an article on the subject by Philip Grey Egerton, Owen considered the possibility that the oblique section could have supported the lower lobe of a tail fin. This hypothesis was confirmed by new finds from Germany. In the Posidonia Shale at Holzmaden, dating from the early Jurassic, the first ichthyosaur skeletons had already been found in the early nineteenth century. During the latter half of the century, the rate of discovery increased to a few hundred each year. Ultimately, over four thousand were uncovered, forming the bulk of ichthyosaur specimens displayed. The sites were also a Konservat-Lagerstätte, meaning that not only the quantity, but also the quality of the fossils was exceptional. The skeletons were very complete and often preserved soft tissues, including tail and dorsal fins. Additionally, female individuals were discovered with embryos.
20th century
In the early twentieth century, ichthyosaur research was dominated by the German paleontologist Friedrich von Huene, who wrote an extensive series of articles, taking advantage of easy access to the many specimens found in his country. The amount of anatomical data was thereby vastly increased. Von Huene also travelled widely abroad, describing many fossils from locations outside Europe. During the 20th century, North America became an important source of new fossils. In 1905, the Saurian Expedition, led by John Campbell Merriam and financed by Annie Montague Alexander, found twenty-five specimens in central Nevada, an area that lay under a shallow ocean during the Triassic. Several of these are in the collection of the University of California Museum of Paleontology. Other specimens are embedded in the rock and visible at Berlin–Ichthyosaur State Park in Nye County. In 1977, the Triassic ichthyosaur Shonisaurus became the state fossil of Nevada. After a lull during the middle of the century, with no new genera being named between the 1930s and the 1970s, the rate of discoveries picked up towards its end. About half of the ichthyosaur genera determined to be valid were described after 1990.
In 1992, Canadian paleontologist Elizabeth Nicholls uncovered the largest known specimen, a Shastasaurus. The new finds have allowed a gradual improvement in knowledge about the anatomy and physiology of what had already been seen as rather advanced "Mesozoic dolphins". Christopher McGowan published a large number of articles and also brought the group to the attention of the general public. The new method of cladistics provided a means to exactly calculate the relationships between groups of animals, and in 1999, Ryosuke Motani published the first extensive study of ichthyosaur phylogenetics.
21st century
In 2003, McGowan and Motani published the first modern textbook on the Ichthyosauria and their closest relatives. Two jawbones of a gigantic ichthyosaur were discovered in 2016 and 2020 at Lilstock and at a second site, both in Somerset, UK. Simple scaling suggests that this ichthyosaur had an estimated total length of up to 26 metres (82 feet), making it the largest marine reptile known to date. The fossils of this individual have been dated to about 202 million years ago.
Evolutionary history
Origin
The origin of the ichthyosaurs is contentious. Until recently, clear transitional forms linking them with land-dwelling vertebrate groups had not been found, the earliest known species of the ichthyosaur lineage being already fully aquatic. In 2014, a small basal ichthyosauriform from the upper Lower Triassic, discovered in China, was described with characteristics suggesting an amphibious lifestyle. In 1937, Friedrich von Huene even hypothesised that ichthyosaurs were not reptiles, but instead represented a lineage separately developed from amphibians. Today, this notion has been discarded and a consensus exists that ichthyosaurs are amniote tetrapods, having descended from terrestrial egg-laying amniotes during the late Permian or the earliest Triassic. However, establishing their position within the amniote evolutionary tree has proven difficult, because their heavily derived morphology obscures their ancestry. Several conflicting hypotheses have been posited on the subject. In the second half of the 20th century, ichthyosaurs were usually assumed to be members of the Anapsida, seen as an early branch of "primitive" reptiles. This would explain the early appearance of ichthyosaurs in the fossil record, and also their lack of clear affinities with other reptile groups, as anapsids were supposed to be little specialised. This hypothesis has become unpopular for being inherently vague, because Anapsida is an unnatural, paraphyletic group. Modern quantitative cladistic analyses consistently indicate that ichthyosaurs are members of the clade Diapsida. Some studies recover a basal, or low, position in the diapsid tree; more analyses place them within the Neodiapsida, a derived diapsid subgroup. Since the 1980s, a close relationship has been assumed between the Ichthyosauria and the Sauropterygia, another marine reptile group, within an overarching Euryapsida, with one such study in 1997 by John Merck showing them to be monophyletic archosauromorph euryapsids. This has been contested over the years, with the Euryapsida being seen as an unnatural, polyphyletic assemblage of reptiles that happen to share some adaptations to a swimming lifestyle. However, more recent studies have shown further support for a monophyletic clade comprising the Ichthyosauromorpha, Sauropterygia, and Thalattosauria, a large clade of aquatic archosauromorphs originating in the Late Permian and diversifying in the Early Triassic.
Affinity with the Hupehsuchia
Since 1959, a second enigmatic group of ancient sea reptiles has been known, the Hupehsuchia. Like the Ichthyopterygia, the Hupehsuchia have pointed snouts and show polydactyly, the possession of more than five fingers or toes. Their limbs more closely resemble those of land animals, making them appear to be a transitional form between these and ichthyosaurs. Initially, this possibility was largely neglected because the Hupehsuchia have a fundamentally different form of propulsion, with an extremely stiffened trunk. The similarities were explained as a case of convergent evolution. Furthermore, the descent of the Hupehsuchia is no less obscure, meaning a possible close relationship would hardly clarify the general evolutionary position of the ichthyosaurs. In 2014, Cartorhynchus was announced, a small species with a short snout, large flippers, and a stiff trunk. Its lifestyle might have been amphibious. Motani found it to be more basal than the Ichthyopterygia and named an encompassing clade Ichthyosauriformes. The latter group was combined with the Hupehsuchia into the Ichthyosauromorpha. The ichthyosauromorphs were found to be diapsids. The proposed relationships are shown by this cladogram:
Early Ichthyopterygia
The earliest ichthyosaurs are known from Early and early Middle Triassic (Olenekian and Anisian) strata of Canada, China, Japan, and Spitsbergen in Norway, being up to 246 million years old. These first forms included the genera Chaohusaurus, Grippia, and Utatsusaurus. Even older fossils show that they were present around 250 million years ago, just two million years after the Permian mass extinction. This early diversity suggests an even earlier origin, possibly in the late Permian. They more resembled finned lizards than the fishes or dolphins to which the later, more familiar species were similar. Their bodies were elongated and they probably used an anguilliform locomotion, swimming by undulations of the entire trunk. Like land animals, they had robustly built pectoral girdles and pelves, and their vertebrae still possessed the usual interlocking processes to support the body against the force of gravity. However, they were already rather advanced in having limbs that had been completely transformed into flippers. They also were probably warm-blooded and viviparous. These very early "proto-ichthyosaurs" had such a distinctive build compared to "ichthyosaurs proper" that Motani excluded them from the Ichthyosauria and placed them in a basal position in a larger clade, the Ichthyopterygia. However, this solution was not adopted by all researchers.
Later Triassic forms
The basal forms quickly gave rise to ichthyosaurs in the narrow sense sometime around the boundary between the Early Triassic and Middle Triassic; the earliest Ichthyosauria in the sense Motani gave to the concept appear about 245 million years ago. These later diversified into a variety of forms, including the still sea-serpent-like Cymbospondylus, a problematic form which reached ten metres in length, and smaller, more typical forms like Mixosaurus. The Mixosauria were already very fish-like, with a pointed skull, a shorter trunk, a more vertical tail fin, a dorsal fin, and short flippers containing many phalanges. The sister group of the Mixosauria were the more advanced Merriamosauria. By the Late Triassic, merriamosaurs consisted of both the large, classic Shastasauria and the more advanced, "dolphin-like" Euichthyosauria.
Experts disagree over whether these represent an evolutionary continuum, with the less specialised shastasaurs a paraphyletic grade that was evolving into the more advanced forms, or whether the two were separate clades that evolved from a common ancestor earlier on. The Euichthyosauria possessed narrower front flippers, with a reduced number of fingers. Basal euichthyosaurs were Californosaurus and Toretocnemus. A more derived branch were the Parvipelvia, with a reduced pelvis; basal forms of these are Hudsonelpidia and Macgowania. During the Carnian and Norian, the Shastasauria reached huge sizes. Shonisaurus popularis, known from a number of specimens from the Carnian of Nevada, was long. Norian Shonisauridae are known from both sides of the Pacific. Himalayasaurus tibetensis and Tibetosaurus (probably a synonym) have been found in Tibet. These large (10- to 15-m-long) ichthyosaurs have been placed by some into the genus Shonisaurus. The gigantic Shonisaurus sikanniensis (assigned to Shastasaurus between 2011 and 2013), whose remains were found in the Pardonet Formation of British Columbia, has been estimated to be as much as in length. Ichthyotitan, found in Somerset, has been estimated to be as much as 26 m long; if correct, this would make it the largest marine reptile known to date. In the Late Triassic, ichthyosaurs attained the peak of their size and diversity. They occupied many ecological niches. Some were apex predators; others were hunters of small prey. Several species perhaps specialised in suction feeding or were ram feeders; durophagous forms are also known. Towards the end of the Late Triassic, a decline in variability seems to have occurred. The giant species seem to have disappeared at the end of the Norian. Rhaetian (latest Triassic) ichthyosaurs are known from England, and these are very similar to those of the Early Jurassic. A possible explanation is increased competition from sharks, Teleostei, and the first Plesiosauria. Like the dinosaurs, the ichthyosaurs and their contemporaries, the plesiosaurs, survived the Triassic–Jurassic extinction event, and quickly diversified again to fill the vacant ecological niches of the early Jurassic.
Jurassic
During the Early Jurassic, the ichthyosaurs still showed a large variety of species, ranging from in length. Many well-preserved specimens from England and Germany date to this time, and well-known genera include Eurhinosaurus, Ichthyosaurus, Leptonectes, Stenopterygius, and the large predator Temnodontosaurus. More basal parvipelvians like Suevoleviathan were also present. The general morphological variability had been strongly reduced, however. Giant forms, suction feeders, and durophagous species were absent. Many of these genera possessed streamlined, dolphin-like thunniform bodies, although more basal clades like the Eurhinosauria, which includes Leptonectes and Eurhinosaurus, had longer bodies and long snouts. Few ichthyosaur fossils are known from the Middle Jurassic. This might be a result of the generally poor fossil record of this epoch. The strata of the Late Jurassic seem to indicate that a further decrease in diversity had taken place. From the Middle Jurassic onwards, almost all ichthyosaurs belonged to the thunnosaurian clade Ophthalmosauridae. Represented by Ophthalmosaurus and related genera, they were very similar in general build to Ichthyosaurus. The eyes of Ophthalmosaurus were huge, and these animals likely hunted in dim and deep water.
However, new finds from the Cretaceous indicate that ichthyosaur diversity in the Late Jurassic must have been underestimated.
Cretaceous
Traditionally, ichthyosaurs were seen as decreasing in diversity even further during the Cretaceous, though they had a worldwide distribution. All fossils from this period were referred to a single genus: Platypterygius. This last ichthyosaur genus was thought to have become extinct early in the Late Cretaceous, during the Cenomanian about 95 million years ago, much earlier than other large Mesozoic reptile groups that survived until the very end of the Cretaceous. Two major explanations have been proposed for this extinction: either chance, or competition from other large marine predators such as plesiosaurs. The overspecialisation of ichthyosaurs may have been a contributing factor to their extinction, as they were possibly unable to 'keep up' with the fast teleost fish that had become dominant at this time, against which the sit-and-wait ambush strategies of the mosasauroids proved superior. This model thus emphasised evolutionary stagnation, the only innovation shown by Platypterygius being its ten fingers. Recent studies, however, show that ichthyosaurs were actually far more diverse in the Cretaceous than previously thought. Fragments previously referred to "Platypterygius" have been found to be from several different taxa. As of 2012, at least eight lineages are known to have spanned the Jurassic-Cretaceous boundary, including Acamptonectes, Sveltonectes, Caypullisaurus, and Maiaspondylus. In 2013, a Cretaceous basal thunnosaurian was revealed: Malawania. Indeed, a radiation likely occurred during the Early Cretaceous, due to an increase in coastlines as the continents broke up further. The demise of the ichthyosaurs has been described as a two-step process. A first extinction event at the beginning of the Cenomanian eliminated two of the three ichthyosaur feeding guilds still present, the 'soft-prey specialists' and the 'generalists', leaving only an unspecialised apex predator group. The second extinction event took place during the Cenomanian-Turonian boundary event, a marine 'anoxic event', after which just a single lineage survived, Platypterygius hercynicus, which then disappeared about 93 million years ago. Ichthyosaur extinction was thus a pair of abrupt events rather than a long decline, probably related to the environmental upheavals and climatic changes in the Cenomanian and Turonian. Competition with early mosasaurs is unlikely to have been a contributing factor, since large mosasaurs did not appear until 3 million years after the ichthyosaur extinction, filling the ecological void left by the ichthyosaurs' disappearance. Plesiosaurian polycotylids perhaps also filled some of the niches previously occupied by ichthyosaurs, although the two groups had coexisted for 19 million years. The extinction was most likely the result of ecological change and volatility that caused changes in migration, food availability, and birthing grounds. This part of the Cretaceous was one in which many other marine extinctions occurred, including those of some types of microplankton, ammonites, belemnites, and reef-building bivalves.
Phylogeny
In modern phylogeny, clades are defined that contain all species forming a certain branch of the evolutionary tree. This also makes it possible to clearly indicate the relationships between the various subgroups in a cladogram.
In 1999, a node clade Ichthyopterygia was defined by Motani as the group consisting of the last common ancestor of Ichthyosaurus communis, Utatsusaurus hataii and Parvinatator wapitiensis, and all its descendants. Within Motani's phylogeny, the Ichthyopterygia were the larger parent clade of a smaller stem clade Ichthyosauria, defined as the group consisting of Ichthyosaurus communis and all species more closely related to Ichthyosaurus than to Grippia longirostris. Motani's concept of the Ichthyosauria was thus more limited than the traditional one, which also contained basal forms such as Grippia, Utatsusaurus, and Parvinatator. The following cladogram is based on Motani (1999):
An alternative terminology was proposed by Maisch & Matzke in 2000, trying to preserve the traditional, more encompassing content of the concept Ichthyosauria. They defined a node clade Ichthyosauria as the group consisting of the last common ancestor of Thaisaurus chonglakmanii, Utatsusaurus hataii, and Ophthalmosaurus icenicus, and all its descendants. Ichthyosauria sensu Motani might materially be identical to a clade that Maisch & Matzke in 2000 called Hueneosauria, depending on the actual relationships. Cladogram based on Maisch and Matzke (2000) and Maisch and Matzke (2003) with clade names following Maisch (2010):
Description
Size
Ichthyosaurs averaged about in length. Some individual specimens were as short as ; some species were much larger: the Triassic Shonisaurus popularis was about long, and in 2004 Shonisaurus sikanniensis (assigned to Shastasaurus between 2011 and 2013) was estimated to have been in length. Fragmentary finds suggest the presence of a form in the early Jurassic. In 2018, lower jaw fragments from England were reported indicating a length of between 20 and 25 m (66 to 82 ft); these have since been described as Ichthyotitan severnensis. According to weight estimates by Ryosuke Motani, a Stenopterygius weighed around , whilst an Ophthalmosaurus icenicus weighed .
General build
While the earliest known members of the ichthyosaur lineage were more eel-like in build, later ichthyosaurs resembled more typical fishes or dolphins, having a dolphin-like head with a short neck and a long snout. Ichthyosaur fore and hind limbs had been fully transformed into flippers. Some species had a dorsal fin on their backs and a more or less vertical caudal fluke at the rear of a rather short tail. Although ichthyosaurs looked like fish, they were not. Evolutionary biologist Stephen Jay Gould said that the ichthyosaur was his favourite example of convergent evolution, in which similarities of structure are analogous, not homologous, caused not by common descent but by a similar adaptation to an identical environment: "This sea-going reptile with terrestrial ancestors converged so strongly on fishes that it actually evolved a dorsal fin and tail in just the right place and with just the right hydrological design. These structures are all the more remarkable because they evolved from nothing—the ancestral terrestrial reptile had no hump on its back or blade on its tail to serve as a precursor."
Diagnostic traits
Derived ichthyosaurs in the narrow sense, as defined by Motani in 1999, differ from their closest basal ichthyopterygian relatives in certain traits. Motani listed a number of these. The external nostril is located on the side of the skull, and is hardly visible from above. The upper rim of the eye socket consists of a bone bar formed by the prefrontal and the postfrontal bones.
The postorbital in side view is excluded from the supratemporal fenestra. The opening for the parietal eye is located on the border of the parietal and the frontal bone. The lateral wing of the pterygoid is incompletely and variably ossified. The ulna lacks the part behind the original shaft axis. The rear dorsal vertebrae are disc-shaped.
Skeleton
Skull
Basal Ichthyopterygia already had elongated, triangular skulls. In ichthyosaurs in the narrow sense, the snout became very pointed. The snout is formed by the premaxilla. The maxilla behind it is usually shorter and sometimes excluded from the external nostril by the rear branch of the premaxilla. Accordingly, the number of premaxillary teeth is high, while the maxillary teeth are fewer in number or even completely absent. The rear top of the snout is formed by the nasal bones. Derived species have a foramen internasale, a midline opening separating the rear of the nasal bones. The nasal bone usually forms the top and front rim of the bony nostril, itself often placed just in front of the eye socket. However, in some Triassic species, the premaxilla is so strongly extended at its rear that it even excludes the nasal from the nostril. The rear of the skull is dominated by a large eye socket, often covering the major part of the rear side surface. In the socket, a large scleral ring is present; this is a circular structure of small, overlapping bone segments protecting the eye against the water pressure. Both in the relative and absolute senses, ichthyosaurs have the largest eye sockets of all known vertebrates. The other rear skull elements are typically so compressed and fused that they are difficult to identify. The top rear element of the skull was usually assumed to be the supratemporal bone, while the squamosal and quadratojugal were sometimes fused. However, in 1968, Alfred Sherwood Romer stated that the presumed supratemporal was in fact the squamosal, an interpretation which was supported by McGowan in 1973. In 1990, though, John Steve Massare convinced most researchers that the original identification had been the correct one after all. The supratemporal forms the rear rim of the supratemporal opening; a lower temporal opening at the side is lacking. The front rim of the supratemporal opening is typically formed by the postfrontal; only in the very basal Utatsusaurus do the postorbital and the squamosal still reach this edge. Between the paired supratemporal openings, the skull roof is narrow; some species have a longitudinal crest on it serving as an attachment for the jaw muscles. Basal Ichthyopterygia have a parietal eye opening between the paired parietal bones. In ichthyosaurs proper, this opening moves to the front, first to the border between the parietals and the frontals and ultimately between the frontals, a condition shown by derived species. Postparietal and tabular bones are lacking. Often, the bones of the back of the skull and the palate are incompletely ossified, apparently having partly remained cartilage. The occipital condyle is typically very convex. The stapes, the bone transmitting sound waves from the eardrum to the middle ear, is elongated and not pierced by a foramen. Pterygoid teeth are typically lacking.
Lower jaws
Like the snout, the lower jaws are elongated. However, in some species, such as Eurhinosaurus and Excalibosaurus, the front of the snout protrudes far beyond the lower jaws. While the front of the lower jaw is typically low, its depth at the rear is very variable.
The greater part of the lower jaw is formed by the dentary at the front, the tooth-bearing bone. On its inner side, the dentary is covered by a splenial that extends forwards to the symphysis, the common contact surface where the two lower jaws are joined. The jaw joints do not allow a horizontal chewing movement: they function as simple hinges to vertically open or close the jaws.
Teeth
Ichthyosaur teeth are typically conical. Fish-eating species have long and slender tooth crowns that are slightly recurved. Forms specialised in catching larger prey have shorter, broader, and straighter teeth; sometimes, cutting edges are present. Thalattoarchon, an apex predator, had larger teeth formed like flattened blades. Durophagous species that ate shellfish have low, convex teeth that are closely packed. Many ichthyosaur dentitions are heterodont, combining several tooth shapes, e.g. small teeth at the front and larger teeth at the rear. The teeth are usually placed in tooth sockets; derived species possess a common tooth groove, a condition known as aulacodonty. In the latter case, adult individuals sometimes become toothless. Teeth in tooth sockets sometimes fuse with the jawbone. In ichthyosaur teeth, the dentine shows prominent vertical wrinkles. Durophagous forms have teeth with deep vertical grooves and wrinkles in the enamel.
Postcrania
Vertebral column
Basal Ichthyopterygia, like their land-dwelling ancestors, still had vertebrae that possessed a full set of processes allowing them to interlock and articulate, forming a vertebral column supporting the weight of the body. As ichthyosaurs were fully aquatic, their bodies were supported by the Archimedes force exerted by the water; in other words, they were buoyant. Therefore, the vertebral processes had lost much of their function. Early ichthyosaurs proper had rear dorsal vertebrae that had become disc-shaped, like those of typical fishes. In more derived species, the front dorsals also became discs. Gradually, most processes were lost, including those for rib attachment. The vertebral bodies became much shorter. The front and rear sides of the discs were hollowed out, resulting in a so-called amphicoelous condition. A longitudinal cross-section of such a vertebra has an hourglass shape. This morphology is unique within the Amniota and makes it easy to distinguish ichthyosaur vertebrae from those of other marine reptiles. The only process that kept its function was the spine at the top, serving as an attachment for the dorsal muscles. However, even the spine became a simple structure. The neural arch, of which it was an outgrowth, typically no longer fused to the vertebral centrum. The neck is short, and derived species show a reduction in the number of cervical vertebrae. The short neck positions the skull close to the trunk, usually at a slight oblique elevation to it. Derived species usually also have a reduced number of dorsals, the presacral vertebrae numbering about forty to fifty in total. The vertebral column is little differentiated. Basal Ichthyopterygia still have two sacral vertebrae, but these are not fused. Early Triassic forms have a transversely flattened tail base with high spines for an undulating tail movement. Derived forms have a shorter tail with the characteristic kink at the end; a section of wedge-shaped vertebrae, itself supporting the fleshy upper tail fin lobe, forced the tail end into the lower fin lobe.
As derived species no longer have transverse processes on their vertebrae (again a condition unique within the Amniota), the parapophyseal and diapophyseal rib joints have been reduced to flat facets, at least one of which is located on the vertebral body. The number of facets can be one or two; their profile can be circular or oval. Their shape often differs according to the position of the vertebra within the column. The presence of two facets per side does not imply that the rib itself is double-headed: often, even in that case, it has a single head. The ribs typically are very thin and possess a longitudinal groove on both the inner and the outer sides. The lower side of the chest is formed by gastralia. These belly ribs have a single centre segment and one or two outer segments per side. They are not fused into a true plastron. Usually two gastralia are present per dorsal rib.
Appendicular skeleton
The shoulder girdle of ichthyosaurs is not much modified from its original condition. Some basal forms show a hatchet- or crescent-shaped shoulder blade or scapula; derived forms have an elongated blade positioned on a broader base. The scapula is not fused with the coracoid into a scapulocoracoid, indicating that the forces exerted on the shoulder girdle were moderate. The shoulder joint is positioned on the border between the scapula and the coracoid. Both coracoids are fused on their common midline. The coracoid shape is very variable, but usually it is rather low. The upper part of the shoulder girdle is formed by two long and slender clavicles, crowned by a central interclavicular bone that is large and triangular in basal forms, small and T-shaped in Jurassic species. Breast bones or sterna are absent. Basal forms have a forelimb that is still functionally differentiated, in some details resembling the arm of their land-dwelling forebears; the ulna and radius are elongated and somewhat separated; the carpals are rounded, allowing the wrist to rotate; the number of phalanges is within the range shown by land animals. Ichthyosaurs proper, to the contrary, have a forelimb that is fully adapted to its function as a flipper. However, the adaptations are very variable. Triassic species typically have a very derived humerus, changed into a disc. Jurassic species tend to have a more elongated humerus with a rounded head, narrow shaft, and expanded lower end. The radius and ulna are always strongly flattened, but can be circular, with or without a notch, or have a waist. Notches can be homologous to the original shafts, but can also be newly formed. Jurassic forms no longer have a space, the spatium interosseum, between the radius and ulna. Often, the latter bones gradually merge into the lower, disc-shaped elements: the up to four carpals, which again differ little in form from the up to five metacarpals. The phalanges show a strongly derived condition, being small, disc-shaped elements positioned in long rows. Sometimes, the number of fingers is reduced, to as low as two; this is a rather common phenomenon within the Tetrapoda. Unique among derived tetrapods, however, is the fact that some species show nonpathological polydactyly, the number of fingers being higher than five. Some species had ten fingers per hand (e.g., Caypullisaurus). These fingers, again, can have an increased number of phalanges, up to thirty, a phenomenon called hyperphalangy, also known from the Plesiosauria, mosasaurs, and the Cetacea. The high number of elements allows the flipper to be shaped as a hydrofoil.
When a high number of fingers is present, their identity is difficult to determine. It is usually assumed that fingers were added both at the front and at the rear, perhaps to a core of four original fingers. If fingers are added, often the number of metacarpals and carpals is also increased; sometimes even an extra lower arm element is present. Earlier, ichthyosaurs were commonly divided into "longipinnate" and "latipinnate" forms, according to the long or wide shape of the front flippers, but recent research has shown that these are not natural groups; ichthyosaur clades often contain species both with and without elongated forelimbs. The ichthyosaur pelvis is typically rather reduced. The three pelvic bones (the ilium, the ischium, and the pubic bone) are not fused and often do not even touch each other. Also, the left and right pelvic sides no longer touch; only basal forms still have sacral ribs connecting the ilia to the vertebral column. The hip joint is not closed on the inside. The pubic bone typically does not connect to the ischium behind it; the space in between is identified by some workers as the fenestra thyreoidea, while other researchers deny that the term is applicable given the generally loose structure of the pelvis. Some later species have a connected pubic bone and ischium, but in this case, the femoral head no longer articulates with the hip joint. Triassic species have plate-like pubic bones and ischia; in later species these elements become elongated with a narrow shaft and can form a single rod. Typically, the hindlimbs are shorter than the forelimbs, possessing a smaller number of elements. Often, the rear flipper is only half the length of the front flipper. The thighbone is short and broad, often with a narrow waist and an expanded lower end. The tibia, fibula and metatarsals are merged into a mosaic of bone discs supporting the hydrofoil. Three to six toes are present. The toe phalanges also show hyperphalangy; exceptionally, Ophthalmosaurus shows a reduced number of phalanges.
Soft tissue
The earliest reconstructions of ichthyosaurs all omitted dorsal fins and caudal (tail) flukes, which were not supported by any hard skeletal structure and so were not preserved in many fossils. Only the lower tail lobe is supported by the vertebral column. In the early 1880s, the first body outlines of ichthyosaurs were discovered. In 1881, Richard Owen reported ichthyosaur body outlines showing tail flukes from Lower Jurassic rocks in Barrow-upon-Soar, England. Other well-preserved specimens have since shown that in some more primitive ichthyosaurs, like a specimen of Chaohusaurus geishanensis, the tail fluke was weakly developed and only had a dorsal tail lobe, making the tail more paddle-like. Over the years, the visibility of the tail lobe has faded away in this specimen. The presence of dorsal fins in ichthyosaurs has been controversial. Finely preserved specimens from the Holzmaden Lagerstätten in Germany found in the late 19th century revealed additional traces, usually preserved in black, of the outline of the entire body, including the first evidence of dorsal fins in ichthyosaurs. Unique conditions permitted the preservation of these outlines, which probably consist of bacterial mats, not the remains of the original tissues themselves. In 1987, David Martill argued that, given the indirect method of conservation by bacteria, these outlines were unlikely to have been reliably preserved in any fine detail. He concluded that no authentic dorsal fins had been discovered.
After displaced flaps of skin from the body had initially been misinterpreted as fins, fossil preparators came to expect such fins to be present, and would have identified any discolouration in the appropriate position as a dorsal fin or even falsified such structures. The lack of a dorsal fin would also explain why ichthyosaurs, contrary to porpoises, retained hind flippers, as these were needed for stability. Other researchers noted that, while the outlines might have been sharpened and smoothed by preparators because fossil bacterial mats usually have indistinct edges, many of the preserved dorsal fins were probably authentic and at least somewhat close to the true body outline. At least one specimen, R158 (in the collections of the Paleontologiska Museet, Uppsala University), shows the expected faded edges of a bacterial mat, so it has not been altered by preparators, yet it still preserves a generally tuna-like body outline including a dorsal fin. In 1993, Martill admitted that at least some dorsal fin specimens are authentic. The fossil specimens that preserved dorsal fins also showed that the flippers were pointed and often far wider than the underlying bones would suggest. The fins were supported by fibrous tissue. In some specimens, four layers of collagen are visible, the fibres of the covering layers crossing those of the collagen below. In 2017, the discovery was reported from the German Posidonia Shale of 182.7-million-year-old vertebrae of Stenopterygius in a carbonate nodule, still containing collagen fibres, cholesterol, platelets, and red and white blood cells. The structures would not have been petrified, but represent the original organic tissues, of which the biomolecules could be identified. The exceptional preservation was explained by the protective environment offered by the nodule. The red blood cells found were one-fourth to one-fifth the size of those of modern mammals. This would have been an adaptation for improved oxygen absorption, particularly in view of the low oxygen levels during the Toarcian. The cholesterol had a high carbon-13 isotope component, which might indicate a higher position in the food chain and a diet of fish and cephalopods. In 2018, evidence of blubber was discovered in Stenopterygius.
Skin and colouration
Typically, fossils that preserve it suggest that the skin of ichthyosaurs was smooth and elastic, lacking scales. However, these remains are not impressions per se, but outlines formed from bacterial growth. In one case, a true impression of the skin was reported from a specimen of Aegirosaurus found in the Solnhofen Plattenkalk, rocks which were capable of preserving even the finest detail. Minuscule scales seemed to be visible in this specimen. The colouration of ichthyosaurs is difficult to determine. In 1956, Mary Whitear reported finding melanocytes, pigment cells in which reddish-brown pigment granules would still be present, in a skin specimen of a British fossil, R 509. Ichthyosaurs are traditionally assumed to have employed countershading (dark on top, light at the bottom), like sharks, penguins, and other modern animals, serving as camouflage during hunting. This was contradicted in 2014 by the discovery of melanosomes, black melanin-bearing structures, in the skin of ichthyosaur specimen YORYM 1993.338 by Johan Lindgren of Lund University. It was concluded that ichthyosaurs were likely uniformly dark coloured, for thermoregulation and to camouflage them in deep water while hunting.
This is in contrast to mosasaurids and prehistoric leatherback turtles, which were found to be countershaded. However, a 2015 study doubted Lindgren and colleagues' interpretation. This study noted that a basal layer of melanosomes in the skin is ubiquitous in reptile colouration, but does not necessarily correspond to a dark appearance. Other chromatophore structures (such as iridophores, xanthophores, and erythrophores) affect colouration in extant reptiles but are rarely preserved or identified in fossils. Thus, because the presence of these chromatophores is unknown, YORYM 1993.338 could have been countershaded, green, or various other colours or patterns. In 2018, Lindgren and his colleagues again argued that ichthyosaurs would have been countershaded, on the basis of the distributional variation of eumelanin-containing melanophores found on a specimen of Stenopterygius.
Gastroliths
Gastroliths, stomach stones that might have assisted digestion or regulated buoyancy, have only on a few occasions been found associated with ichthyosaur skeletons, once with a specimen of Nannopterygius and a second time in a Panjiangsaurus fossil. Ichthyosaur coprolites, petrified faeces, are very common, however, and were already being sold by Mary Anning.
Paleobiology
Ecology
Apart from the obvious similarities to fish, ichthyosaurs also shared parallel developmental features with dolphins, lamnid sharks, and tuna. This gave them a broadly similar appearance, possibly implied similar activity levels (including thermoregulation), and presumably placed them broadly in a similar ecological niche. Ichthyosaurs were not primarily coastal animals; they also inhabited the open ocean. They have been found in all Mesozoic oceans. This is even true of the earliest Ichthyopterygia, making identification of a particular area as their place of origin impossible.
Feeding
Ichthyosaurs were carnivorous; they ranged so widely in size, and survived for so long, that they are likely to have had a wide range of prey. Species with pointed snouts were adapted to catching smaller animals. McGowan speculated that forms with protruding upper jaws, in the Eurhinosauria, would have used their pointed snouts to slash prey, as has been assumed for swordfish. The most commonly preserved gut contents in ichthyosaurs are the remains of cephalopods. Less commonly, they fed on fish and other vertebrates, including smaller ichthyosaurs. The large Triassic form Thalattoarchon had large, bladed teeth and was probably a macropredator, capable of killing prey its own size, and Himalayasaurus and several species of Temnodontosaurus also shared adaptations for killing very large prey. These food preferences have been confirmed by coprolites, which indeed contain the remains of fishes and cephalopods. Another confirmation is provided by fossilised stomach contents. Buckland in 1835 described the presence, in one specimen, of a large mass of partly digested fishes, recognisable by their scales. Subsequent research in 1968 determined that these belonged to the fish genus Pholidophorus, but also that cephalopod beaks and sucker hooks were present. Such hard food particles apparently were retained by the stomach and regularly regurgitated. Carcasses of drowned animals were eaten as well: in 2003 a specimen of Platypterygius longmani was reported to have, besides fishes and a turtle, the bones of a land bird in its stomach. Some early ichthyosaurs were durophagous and had flat, convex teeth adapted for crushing shellfish. They thus ate benthos from the floor of shallow seas.
Other species were perhaps suction feeders, sucking animals into their mouths by quickly opening their relatively short jaws. This was first assumed for Shonisaurus, which as a giant might by this means have secured a constant food supply for its huge body, and in 2011 for the short-snouted Guanlingsaurus liangae. However, in 2013 a study concluded that the hyoid bone of ichthyosaurs, at the tongue base, was insufficiently ossified to support a suction-feeding movement, and suggested the alternative that such species were ram feeders, gathering food by constantly swimming forwards with a wide-open mouth. Typical ichthyosaurs had very large eyes, protected within a bony ring, suggesting that they may have hunted at night or at great depths; the only extant animals with similarly large eyes are the giant and colossal squids. Sight thus seems to have been one of the main senses employed while hunting. Hearing might have been poor, given the very robust form of the stapes. Grooves in the palate, however, suggest that smell might have been acute or even that electro-sensory organs might have been present. Ichthyosaurs themselves served as food for other animals. During the Triassic their natural predators mainly consisted of sharks and other ichthyosaurs; in the Jurassic these were joined by large Plesiosauria and Thalattosuchia. This is again confirmed by stomach contents: in 2009, for example, a plesiosaur specimen was reported with an ichthyosaur embryo in its gut.
Locomotion
In ichthyosaurs, the main propulsion was provided by a lateral movement of the body. Early forms employed an anguilliform or eel-like movement, with undulations of the entire trunk and tail. This is usually considered rather inefficient. Later forms, like the Parvipelvia, had a shorter trunk and tail and probably used a more efficient carangiform or even thunniform movement, in which only the last third of the body, or just the tail end, is flexed. The trunk in such species is rather stiff. The tail was bi-lobed, with the lower lobe being supported by the caudal vertebral column, which was "kinked" ventrally to follow the contours of the ventral lobe. Basal species had a rather asymmetric or "heterocercal" tail fin. The asymmetry differed from that of sharks in that the lower lobe, rather than the upper lobe, was the larger. More derived forms had a nearly vertical, symmetric tail fin. Sharks use their asymmetric tail fin to compensate for the fact that they are negatively buoyant, heavier than water: the downward pressure exerted by the tail forces the body as a whole into an ascending angle. This way, swimming forwards generates enough lift to equal the sinking force caused by their weight. In 1973, McGowan concluded that, because ichthyosaurs have a reversed tail fin asymmetry compared to sharks, they were apparently positively buoyant, lighter than water, which would be confirmed by their lack of gastroliths and of pachyostosis, or dense bone. The tail would have served to keep the body at a descending angle. The front flippers would be used to push the front of the body further downwards and control pitch. In 1987, however, Michael A. Taylor suggested an alternative hypothesis: as ichthyosaurs could vary their lung content, unlike sharks (which lack a swim bladder), they could also regulate their buoyancy. The tail thus mainly served for neutral propulsion, while small variations in buoyancy were stabilised by slight changes in the flipper angles.
In 1992, McGowan accepted this view, pointing out that shark tails are not a good analogue for derived ichthyosaur tails, which have narrower lobes and are more vertical and symmetric. Derived ichthyosaur tail fins are more like those of tuna and indicate a comparable capacity to sustain a high cruising speed. A comparative study by Motani in 2002 concluded that, in extant animals, small tail fin lobes positively correlate with a high beat frequency. Modern researchers generally concur that ichthyosaurs were negatively buoyant. In 1994, Judy Massare concluded that ichthyosaurs had been the fastest marine reptiles. Their length/depth ratio was between three and five, the optimal range to minimise water resistance or drag. Their smooth skin and streamlined bodies prevented excessive turbulence. Their hydrodynamic efficiency, the degree to which energy is converted into forward movement, would approach that of dolphins and measure about 0.8. Ichthyosaurs would be a fifth faster than plesiosaurs, though half of the difference was explained by assuming a 30% higher metabolism for ichthyosaurs. Together, within Massare's model these effects resulted in a cruising speed of slightly less than five kilometres per hour. However, in 2002, Motani corrected certain mistakes in Massare's formulae and revised the estimated cruising speed to less than two kilometres per hour, somewhat below that of modern Cetacea. However, as the speeds estimated for plesiosaurs and mosasaurids were also revised downwards, ichthyosaurs maintained their relative position. Ichthyosaurs had fin-like limbs of varying relative length. The standard interpretation is that these, together with the dorsal fin and tail fin, were used as control surfaces for directional stability, controlling yaw, and for stabilising pitch and roll, rather than for propulsion. However, during the 1980s the German paleontologist Jürgen Riess proposed an alternative model. After having studied the flying movement made by the forelimbs of plesiosaurs, he suggested that at least those ichthyosaurs that had long flippers used them for a powerful propulsive stroke, moving them up and down. This would explain the non-degenerated shoulder girdle and the evolution of the hand bones, whose perfect hydrofoil profile would have been useless if it was not functionally employed. He thought he had discovered modern analogues in the Queensland lungfish and the Amazon river dolphin, which he presumed also used their long fins for propulsion. Riess expounded upon this hypothesis in a series of articles. This alternative interpretation was generally not adopted by other workers. In 1998, Darren Naish pointed out that the lungfish and the river dolphin actually do not use their fins in this way and that, for example, the modern humpback whale has very long front flippers, supported by a mosaic of bones, which nevertheless mainly serve as rudders. In 2013, a study concluded that broad ichthyosaur flippers, like those of Platypterygius, were not used for propulsion but as control surfaces.
Diving
Many extant lung-breathing marine vertebrates are capable of deep diving. There are some indications of the diving capacity of ichthyosaurs. Quickly ascending from a greater depth can cause decompression sickness. The resulting bone necrosis has been well documented in Jurassic and Cretaceous ichthyosaurs, where it is present in 15% and 18% of specimens, respectively, but is rare in Triassic species.
This could be a sign that basal forms did not dive as deeply, but might also be explained by greater predation pressure during the later epochs, more often necessitating a fast flight to the surface. However, this last possibility is contradicted by the fact that, in modern animals, such damage is not caused by a limited number of rapid ascents, but by a gradual accumulation of non-debilitating degeneration during normal diving behaviour. Additional evidence is provided by the eyes of ichthyosaurs, which among vertebrates are both relatively and absolutely the largest known. Modern leopard seals can dive to up to hunting on sight. Motani suggested that ichthyosaurs, with their relatively much larger eye sockets, should have been able to reach even greater depths. Temnodontosaurus, with eyes that had a diameter of twenty-five centimetres, could probably still see at a depth of 1,600 metres. At these depths, such eyes would have been especially useful for seeing large objects. Later species, such as Ophthalmosaurus, had relatively larger eyes, again an indication that diving capacity was better in Late Jurassic and Cretaceous forms.
Metabolism
Similar to modern cetaceans, such as whales and dolphins, ichthyosaurs were air-breathing. Whales and dolphins are mammals and warm-blooded. Ichthyosaurs, being reptiles, were traditionally assumed to have been cold-blooded. However, since the 1970s many dominant reptile groups of the Mesozoic, such as theropod dinosaurs, pterosaurs and plesiosaurs, have been considered warm-blooded, as this offers an elegant explanation of their dominance. Some direct evidence is available that ichthyosaurs too might have been endothermic. In 1990, Vivian de Buffrénil published a histological study indicating that ichthyosaurs possessed a fibrolamellar bone structure, as in warm-blooded animals in general, typified by fast growth and strong vascularisation. Early Triassic species already show these traits. In 2012, it was reported that even the very basal form Utatsusaurus had this bone type, indicating that the ancestors of ichthyosaurs were already warm-blooded. Additional direct evidence for a high metabolism is the oxygen isotope ratio in the teeth, which indicates a body temperature of between 35 and 39 °C, about 20 °C higher than the surrounding seawater. Blubber is also consistent with warm-bloodedness, as its insulating qualities only make sense if the animal generates its own heat. Indirect evidence for endothermy is provided by the body shape of derived ichthyosaurs, which with its short tail and vertical tail fin seems optimised for a high cruising speed that can only be sustained by a high metabolism: all extant animals swimming this way are either fully warm-blooded or, like sharks and tuna, maintain a high temperature in their body core. This argument does not cover basal forms with a more eel-like body and undulating swimming movement. In 1996, Richard Cowen, while accepting endothermy for the group, presumed that ichthyosaurs would have been subject to Carrier's constraint, a limitation on reptilian respiration pointed out in 1987 by David Carrier: their undulating locomotion forces the air out of the lungs and thus prevents them from breathing while moving. Cowen hypothesised that ichthyosaurs would have overcome this problem by porpoising: constantly jumping out of the water would have allowed them to take a gulp of fresh air during each jump.
Other researchers have tended to assume that Carrier's constraint did not apply to at least the derived ichthyosaurs because of their stiff bodies, an assumption seemingly confirmed by their good diving capacity, which implies an effective respiration and oxygen-storage system. For these species porpoising was not a necessity. Nevertheless, ichthyosaurs would often have surfaced to breathe, probably tilting their heads slightly to take in air, because their nostrils were positioned lower than those of dolphins. Reproduction Ichthyosaurs were viviparous, i.e. they bore live young instead of laying eggs. Although they were reptiles and descended from egg-laying (oviparous) ancestors, viviparity is not as unexpected as it first appears. Air-breathing marine creatures must either come ashore to lay eggs, like turtles and some sea snakes, or else give birth to live young in surface waters, like whales and dolphins. Given their streamlined and transversely flattened bodies, heavily adapted for fast swimming, it would have been difficult, if not impossible, for ichthyosaurs to move far enough on land to lay eggs. This was confirmed as early as 9 December 1845, when naturalist Joseph Chaning Pearce reported a small embryo in a fossil of Ichthyosaurus communis. The embryo, with a length of eleven centimetres, was positioned in the birth canal of its two-and-a-half-metre-long mother, with its head pointed to the rear. Pearce concluded from the fossil that ichthyosaurs had to have been viviparous. Later, numerous adult fossils containing fetuses were found in the Holzmaden deposits. In 1880, Harry Govier Seeley, heading a special British paleontological committee studying the problem of ichthyosaur reproduction, concluded that birth took place in the water and that fossils containing fetuses in the birth canal probably represented cases of premature death of the juvenile, causing the demise of the mother animal as well. A comparison has been made with dolphins and whales, whose young need to be born tail-first to prevent drowning; if the juvenile is born head-first, it dies, and the mother with it if the corpse gets stuck in the birth canal. However, an alternative explanation is that such fossils actually represent females that had died for other reasons while pregnant, after which the decomposition gases drove out the fetuses head-first. In 2014, a study reported the find of a fossilised Chaohusaurus female that had died while giving birth to three neonates. Two had already been expelled while a third was present in the birth canal. The fossil also documented that these early ichthyosaurs were born head-first, in contrast, perhaps, to later genera. As Chaohusaurus is a very basal ichthyopterygian (previously, the most basal genus from which fetuses were known had been Mixosaurus), this discovery suggests that the earliest land-dwelling ancestors of ichthyosaurs had already been viviparous. A comprehensive multi-author study published in 2023 examined the evolution of fetal orientation in ichthyosaurs, based on known specimens of gravid females. Specimens of the basal ichthyosaurs Chaohusaurus and Cymbospondylus showed evidence of head-first birth, while three specimens of Mixosaurus showed evidence of both head-first and tail-first birth. More derived ichthyosaurs, including Stenopterygius, Besanosaurus, Qianichthyosaurus and Platypterygius, showed evidence of tail-first birth.
This indicates that while basal ichthyosaurs were born head-first, merriamosaurian ichthyosaurs preferred tail-first over head-first birth. The authors asserted that the derived ichthyosaurs' preference for tail-first birth may have arisen because it was easier for the female to push on the cranium than on the pelvis when giving birth, or because it reduced maternal energy expenditure on trim control. They disagreed with the "increased asphyxiation risk" hypothesis for the tail-first preference, given that Mixosaurus showed evidence of both head-first and tail-first births; if asphyxiation risk were indeed the reason, strong stabilising selection should have produced a marked preference for tail-first births much earlier in the evolutionary history of every aquatic, viviparous tetrapod clade, which is not the case. Compared with placental mammals or plesiosaurs, ichthyosaur fetuses tend to be very small and their number per litter is often high. In one female of Stenopterygius seven have been identified, in another eleven. The fetuses have at most a quarter of the length of the mother animal. The juveniles have about the same body proportions as adult individuals. The main ontogenetic changes during growth consist of the fusion and greater robustness of the skeletal elements. At least one neonate I. communis individual has been identified, with preserved stomach contents indicating feeding on cephalopods and fish. This is unlike other similar species, such as Stenopterygius, where feeding niches shift from small fish to larger cephalopods through ontogeny. Crocodiles, most sea turtles and some lizards determine the sex of their offspring by manipulating the temperature of the developing eggs' environment; i.e. they do not have distinct sex chromosomes. Live-bearing reptiles do not regulate sex through incubation temperature. A study in 2009, which examined 94 living species of reptiles, birds and mammals, found that genetic control of sex appears to be crucial to live birth. It concluded that in marine reptiles such control predated viviparity and was an adaptation to the stable sea climate of coastal regions. Genetics likely controlled sex in ichthyosaurs, mosasaurs and plesiosaurs. Social behaviour and intelligence Ichthyosaurs are often assumed to have lived in herds or hunting groups, but little evidence is available about the nature of ichthyosaur social behaviour. Some indications exist that a level of sexual dimorphism was present. Skeletons of Eurhinosaurus and Shastasaurus show two morphotypes. Individuals with a longer snout, larger eyes, a longer trunk, a shorter tail, and longer flippers with additional phalanges could have represented the females; the longer trunk may have provided room for the embryos. Generally, the brain shows the limited size and elongated shape of that of modern cold-blooded reptiles. However, in 1973, McGowan, while studying the natural endocast of a well-preserved specimen, pointed out that the telencephalon was not very small. The visual lobes were large, as could be expected from the eye size. The olfactory lobes were, though not especially large, well-differentiated; the same was true of the cerebellum. Pathologies Though fossils revealing ichthyosaur behaviour remain rare, one ichthyosaur fossil is known to have sustained bites to the snout region.
The fossil, discovered in Australia and analysed by Benjamin Kear and colleagues in 2011, bears wounds whose measurements reveal that the bite marks were inflicted by another ichthyosaur, likely of the same species, in a probable case of face biting during a conflict. The wounds show signs of healing in the form of bone growth, meaning that the victim survived the attack. Another ichthyosaur, a very large individual close to nine metres in length, was found in Svalbard; it was nearly complete save for its tail. Scrutiny of the find revealed that while hunting ammonites (as evidenced by an ammonite shell in the throat region), the ichthyosaur was ambushed and attacked, likely by a pliosaurid (known from the same habitat), which severed its tail. The ichthyosaur then sank to the depths, drowned, and eventually became fossilised in the deep water. The find was revealed to the public in the National Geographic special Death of a Sea Monster. Geological formations The following is a list of geological formations in which ichthyosaur fossils have been found:
Biology and health sciences
Prehistoric marine reptiles
Animals
4053383
https://en.wikipedia.org/wiki/Skeleton%20key
Skeleton key
A skeleton key (also known as a passkey) is a type of master key in which the serrated edge has been removed in such a way that it can open numerous locks, most commonly the warded lock. The term derives from the fact that the key has been reduced to its essential parts. Master keys A skeleton key is a key that has been filed or cut so that it can be used to unlock a variety of warded locks, each with a different configuration of wards. This is usually done by removing most of the center of the key, allowing it to pass the wards without interference and so operate the lock. To counteract the illicit creation of such keys, locksmiths can put wards not just in the center but on the outside as well, making the creation of a skeleton key more difficult. Lever-lock skeleton keys are used in locks with usually three or five levers and a set of wards that contact the bit of the key only on the sides; the top of the bit pushes the levers to their correct heights, while the warded section of the key merely has to pass uninterrupted to allow the key to rotate fully. A master key system of lever locks has the same lever heights in all locks. Each door will have different wards and can only be opened by the correctly warded key or the master key. A skeleton key has the warded section of the key removed so that it opens all the doors of a system. Some applications, such as a building with multiple entrance doors, have numerous locks that are keyed alike; one key will open every door. A keyed-alike system is different from a master key system, as none of its locks has a key that can open only that lock. Skeleton keys have often been associated with attempts to defeat locks for illicit purposes, for example to release handcuffs, and standard keys have been filed down for that purpose. Legitimate skeleton or master keys are used in many modern contexts where lock operation is required and the original key has been lost or is not available. In hotels without electronic locks, skeleton keys are used by housekeeping services to enter the rooms.
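The ward-bypass principle described above can be sketched in a few lines of code: model each lock as a set of ward positions and each key as the set of positions where the blank still carries metal; a key turns a lock only if its metal misses every ward. The positions below are made up purely for illustration.

```python
# Toy model of warded locks: a lock is a set of ward positions, a key is a
# set of positions where the key blank still has metal. A key turns a lock
# only if none of its remaining metal collides with a ward.

def opens(key_metal: set, lock_wards: set) -> bool:
    return key_metal.isdisjoint(lock_wards)

locks = {
    "front door": {1, 4},
    "back door": {2, 5},
    "cellar": {3},
}

full_blank = {1, 2, 3, 4, 5, 6}   # an unfiled blank: blocked by every ward
skeleton = {6}                    # filed down to the essential bit only

for name, wards in locks.items():
    print(name, opens(full_blank, wards), opens(skeleton, wards))
# The unfiled blank opens nothing; the skeleton key, keeping metal only at
# a position no lock wards, opens all three.
```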
Technology
Mechanisms
null
4055928
https://en.wikipedia.org/wiki/Structure%20%28mathematical%20logic%29
Structure (mathematical logic)
In universal algebra and in model theory, a structure consists of a set along with a collection of finitary operations and relations that are defined on it. Universal algebra studies structures that generalize the algebraic structures such as groups, rings, fields and vector spaces. The term universal algebra is used for structures of first-order theories with no relation symbols. Model theory has a different scope that encompasses more arbitrary first-order theories, including foundational structures such as models of set theory. From the model-theoretic point of view, structures are the objects used to define the semantics of first-order logic, cf. also Tarski's theory of truth or Tarskian semantics. For a given theory in model theory, a structure is called a model if it satisfies the defining axioms of that theory, although it is sometimes disambiguated as a semantic model when one discusses the notion in the more general setting of mathematical models. Logicians sometimes refer to structures as "interpretations", whereas the term "interpretation" generally has a different (although related) meaning in model theory; see interpretation (model theory). In database theory, structures with no functions are studied as models for relational databases, in the form of relational models. History In the context of mathematical logic, the term "model" was first applied in 1940 by the philosopher Willard Van Orman Quine, in a reference to mathematician Richard Dedekind (1831–1916), a pioneer in the development of set theory. Since the 19th century, one main method for proving the consistency of a set of axioms has been to provide a model for it. Definition Formally, a structure can be defined as a triple (A, σ, I) consisting of a domain A, a signature σ, and an interpretation function I that indicates how the signature is to be interpreted on the domain. To indicate that a structure has a particular signature σ, one can refer to it as a σ-structure. Domain The domain of a structure is an arbitrary set; it is also called the underlying set of the structure, its carrier (especially in universal algebra), or its universe (especially in model theory). In classical first-order logic, the definition of a structure prohibits the empty domain. Sometimes the notation dom(A) or |A| is used for the domain of a structure A, but often no notational distinction is made between a structure and its domain (that is, the same symbol refers both to the structure and to its domain). Signature The signature σ of a structure consists of a set of function symbols and relation symbols, along with a function ar that ascribes to each symbol s a natural number ar(s). The natural number ar(s) is called the arity of s, because it is the arity of the interpretation of s. Since the signatures that arise in algebra often contain only function symbols, a signature with no relation symbols is called an algebraic signature. A structure with such a signature is also called an algebra; this should not be confused with the notion of an algebra over a field. Interpretation function The interpretation function I of a structure assigns functions and relations to the symbols of the signature. To each function symbol f of arity n is assigned an n-ary function on the domain. To each relation symbol R of arity n is assigned an n-ary relation on the domain. A nullary (0-ary) function symbol is called a constant symbol, because its interpretation can be identified with a constant element of the domain.
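To make the triple just defined concrete, here is a minimal sketch of a signature and a structure in code; the encoding (arities as integers, relations as sets of tuples) is an illustrative choice, not standard notation.

```python
# A minimal rendering of the (domain, signature, interpretation) triple.
from dataclasses import dataclass
from typing import Callable, Mapping

@dataclass
class Signature:
    functions: Mapping[str, int]   # function symbol -> arity
    relations: Mapping[str, int]   # relation symbol -> arity

@dataclass
class Structure:
    domain: frozenset
    sig: Signature
    funcs: Mapping[str, Callable]  # interpretation of function symbols
    rels: Mapping[str, set]        # relations as sets of tuples

# The integers mod 2 as a structure for a small algebraic signature:
sig = Signature(functions={"+": 2, "0": 0}, relations={})
Z2 = Structure(
    domain=frozenset({0, 1}),
    sig=sig,
    funcs={"+": lambda a, b: (a + b) % 2, "0": lambda: 0},
    rels={},
)
print(Z2.funcs["+"](1, 1))  # 0: the interpretation of "+" acting on the domain
```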
When a structure (and hence an interpretation function) is given by context, no notational distinction is made between a symbol and its interpretation. For example, if f is a binary function symbol of a structure A, one simply writes f for its interpretation rather than a more explicit notation such as f^A. Examples The standard signature σ for fields consists of two binary function symbols + and ×, from which additional symbols can be derived, such as a unary function symbol − (uniquely determined by +) and the two constant symbols 0 and 1 (uniquely determined by + and × respectively). Thus a structure (algebra) for this signature consists of a set of elements together with two binary functions, which can be enhanced with a unary function and two distinguished elements; but there is no requirement that it satisfy any of the field axioms. The rational numbers Q, the real numbers R and the complex numbers C, like any other field, can be regarded as σ-structures in an obvious way: in all three cases we have the standard signature, with the binary symbols + and ×, the unary symbol − and the constant symbols 0 and 1. The interpretation function for Q is: + is addition of rational numbers, × is multiplication of rational numbers, − is the function that takes each rational number x to −x, 0 is interpreted as the number zero and 1 as the number one; the interpretations for R and C are defined similarly. But the ring of integers Z, which is not a field, is also a σ-structure in the same way. In fact, there is no requirement that any of the field axioms hold in a σ-structure. A signature for ordered fields needs an additional binary relation such as < or ≤, and therefore structures for such a signature are not algebras, even though they are of course algebraic structures in the usual, loose sense of the word. The ordinary signature for set theory includes a single binary relation ∈. A structure for this signature consists of a set of elements and an interpretation of ∈ as a binary relation on these elements. Induced substructures and closed subsets A structure B is called an (induced) substructure of a structure A if B and A have the same signature, the domain of B is contained in the domain of A, and the interpretations of all function and relation symbols agree on the domain of B. The usual notation for this relation is B ⊆ A. A subset B of the domain of a structure A is called closed if it is closed under the functions of A, that is, if the following condition is satisfied: for every natural number n, every n-ary function symbol f (in the signature of A) and all elements b1, ..., bn of B, the result of applying f to the n-tuple (b1, ..., bn) is again an element of B. For every subset B of the domain of A there is a smallest closed subset of the domain that contains B. It is called the closed subset generated by B, or the hull of B, and denoted by ⟨B⟩. The hull operator is a finitary closure operator on the set of subsets of the domain. If B is a closed subset, then it is the domain of an induced substructure of A, obtained by assigning to every symbol of σ the restriction to B of its interpretation in A. Conversely, the domain of an induced substructure is a closed subset. The closed subsets (or induced substructures) of a structure form a lattice. The meet of two subsets is their intersection. The join of two subsets is the closed subset generated by their union. Universal algebra studies the lattice of substructures of a structure in detail. Examples Let σ again be the standard signature for fields. When regarded as σ-structures in the natural way, the rational numbers form a substructure of the real numbers, and the real numbers form a substructure of the complex numbers. The rational numbers are the smallest substructure of the real (or complex) numbers that also satisfies the field axioms. The set of integers gives an even smaller substructure of the real numbers, one which is not a field.
Indeed, the integers are the substructure of the real numbers generated by the empty set, using this signature. The notion in abstract algebra that corresponds to a substructure of a field, in this signature, is that of a subring, rather than that of a subfield. The most obvious way to define a graph is as a structure with a signature consisting of a single binary relation symbol E. The vertices of the graph form the domain of the structure, and for two vertices a and b, E(a, b) means that a and b are connected by an edge. In this encoding, the notion of induced substructure is more restrictive than the notion of subgraph. For example, let G be a graph consisting of two vertices connected by an edge, and let H be the graph consisting of the same vertices but no edges. H is a subgraph of G, but not an induced substructure. The notion in graph theory that corresponds to induced substructures is that of induced subgraphs. Homomorphisms and embeddings Homomorphisms Given two structures A and B of the same signature σ, a (σ-)homomorphism from A to B is a map h from the domain of A to the domain of B that preserves the functions and relations. More precisely: For every n-ary function symbol f of σ and any elements a1, ..., an, the following equation holds: h(f(a1, ..., an)) = f(h(a1), ..., h(an)). For every n-ary relation symbol R of σ and any elements a1, ..., an, the following implication holds: if R(a1, ..., an) holds in A, then R(h(a1), ..., h(an)) holds in B, where in each case the relation symbol is read in its interpretation in the respective structure. A homomorphism h from A to B is typically denoted as h: A → B, although technically the function h is between the domains of the two structures. For every signature σ there is a concrete category σ-Hom which has σ-structures as objects and σ-homomorphisms as morphisms. A homomorphism h: A → B is sometimes called strong if, for every n-ary relation symbol R and any elements b1, ..., bn of B such that R(b1, ..., bn) holds in B, there are a1, ..., an in A such that R(a1, ..., an) holds in A and h(a1) = b1, ..., h(an) = bn. The strong homomorphisms give rise to a subcategory of the category σ-Hom that was defined above. Embeddings A (σ-)homomorphism h: A → B is called a (σ-)embedding if it is one-to-one and, for every n-ary relation symbol R of σ and any elements a1, ..., an, the following equivalence holds: R(a1, ..., an) holds in A if and only if R(h(a1), ..., h(an)) holds in B. Thus an embedding is the same thing as a strong homomorphism which is one-to-one. The category σ-Emb of σ-structures and σ-embeddings is a concrete subcategory of σ-Hom. Induced substructures correspond to subobjects in σ-Emb. If σ has only function symbols, σ-Emb is the subcategory of monomorphisms of σ-Hom. In this case induced substructures also correspond to subobjects in σ-Hom. Example As seen above, in the standard encoding of graphs as structures the induced substructures are precisely the induced subgraphs. However, a homomorphism between graphs is the same thing as a homomorphism between the two structures coding the graphs. In the example of the previous section, even though the subgraph H of G is not induced, the identity map id: H → G is a homomorphism. This map is in fact a monomorphism in the category σ-Hom, and therefore H is a subobject of G which is not an induced substructure. Homomorphism problem The following problem is known as the homomorphism problem: Given two finite structures A and B of a finite relational signature, find a homomorphism h: A → B or show that no such homomorphism exists. Every constraint satisfaction problem (CSP) has a translation into the homomorphism problem. Therefore, the complexity of CSP can be studied using the methods of finite model theory.
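A brute-force search makes the homomorphism problem, and its connection to CSP, concrete. The sketch below specialises to graphs, i.e. structures with a single binary relation symbol E; the function name and the toy inputs are of course hypothetical.

```python
# Brute-force sketch of the homomorphism problem for finite relational
# structures, specialised to graphs (a single binary relation E).
# Exponential in |A|, as expected for a problem that subsumes CSP.
from itertools import product

def find_homomorphism(A_nodes, A_edges, B_nodes, B_edges):
    """Return a map h with (h(a), h(b)) in B_edges for every (a, b) in
    A_edges, or None if no such map exists."""
    A_nodes = list(A_nodes)
    for image in product(B_nodes, repeat=len(A_nodes)):
        h = dict(zip(A_nodes, image))
        if all((h[a], h[b]) in B_edges for (a, b) in A_edges):
            return h
    return None

# A triangle admits no homomorphism into a single (symmetric) edge,
# since such a map would amount to a 2-colouring of the triangle:
triangle = ({1, 2, 3}, {(1, 2), (2, 1), (2, 3), (3, 2), (1, 3), (3, 1)})
edge = ({"x", "y"}, {("x", "y"), ("y", "x")})
print(find_homomorphism(*triangle, *edge))  # None
```

Finding a homomorphism from a graph into the complete graph on k vertices is exactly the k-colouring problem, which is one way to see why the general homomorphism problem is hard.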
Another application is in database theory, where a relational model of a database is essentially the same thing as a relational structure. It turns out that a conjunctive query on a database can be described by another structure in the same signature as the database model. A homomorphism from the structure representing the query to the relational model is then the same thing as a solution to the query. This shows that the conjunctive query problem is also equivalent to the homomorphism problem. Structures and first-order logic Structures are sometimes referred to as "first-order structures". This is misleading, as nothing in their definition ties them to any specific logic, and in fact they are suitable as semantic objects both for very restricted fragments of first-order logic such as that used in universal algebra, and for second-order logic. In connection with first-order logic and model theory, structures are often called models, even when the question "models of what?" has no obvious answer. Satisfaction relation Each first-order structure M has a satisfaction relation M ⊨ φ, defined for all formulas φ in the language consisting of the language of M together with a constant symbol for each element of M, which is interpreted as that element. This relation is defined inductively using Tarski's T-schema. A structure M is said to be a model of a theory T if the language of M is the same as the language of T and every sentence in T is satisfied by M. Thus, for example, a "ring" is a structure for the language of rings that satisfies each of the ring axioms, and a model of ZFC set theory is a structure in the language of set theory that satisfies each of the ZFC axioms. Definable relations An n-ary relation R on the universe (i.e. domain) of a structure M is said to be definable (or explicitly definable, cf. Beth definability, or definable without parameters, cf. below) if there is a formula φ(x1, ..., xn) such that R consists of exactly those n-tuples (a1, ..., an) for which φ(a1, ..., an) is satisfied in M. In other words, R is definable if and only if there is a formula φ such that membership of a tuple in R is equivalent to satisfaction of φ by that tuple. An important special case is the definability of specific elements. An element m of M is definable in M if and only if there is a formula φ(x) such that φ is satisfied in M by m and by no other element. Definability with parameters A relation R is said to be definable with parameters (or B-definable, for a subset B of the domain) if there is a formula φ with parameters from B such that R is definable using φ. Every element of a structure is definable using the element itself as a parameter. Some authors use definable to mean definable without parameters, while other authors mean definable with parameters. Broadly speaking, the convention that definable means definable without parameters is more common amongst set theorists, while the opposite convention is more common amongst model theorists. Implicit definability Recall from above that an n-ary relation R on the universe of M is explicitly definable if there is a formula φ such that R consists of exactly the tuples satisfying φ. Here the formula φ used to define the relation R must be over the signature of M, and so it may not mention R itself, since R is not in the signature of M. If there is a formula φ in the extended language containing the language of M and a new symbol R, and the relation R is the only relation on M that makes φ true, then R is said to be implicitly definable over M. By Beth's theorem, every implicitly definable relation is explicitly definable. Many-sorted structures Structures as defined above are sometimes called one-sorted structures, to distinguish them from the more general many-sorted structures. A many-sorted structure can have an arbitrary number of domains. The sorts are part of the signature, and they play the role of names for the different domains.
Many-sorted signatures also prescribe the sorts on which the functions and relations of a many-sorted structure are defined. Therefore, the arities of function symbols or relation symbols must be more complicated objects, such as tuples of sorts, rather than natural numbers. Vector spaces, for example, can be regarded as two-sorted structures in the following way. The two-sorted signature of vector spaces consists of two sorts V (for vectors) and S (for scalars) and the appropriate function symbols for the scalar and vector operations. If V is a vector space over a field F, the corresponding two-sorted structure consists of the vector domain V, the scalar domain F, and the obvious functions, such as the vector zero, the scalar zero, and scalar multiplication. Many-sorted structures are often used as a convenient tool even when they could be avoided with a little effort. But they are rarely defined in a rigorous way, because it is straightforward and tedious (hence unrewarding) to carry out the generalization explicitly. In most mathematical endeavours, not much attention is paid to the sorts. A many-sorted logic, however, naturally leads to a type theory. As Bart Jacobs puts it: "A logic is always a logic over a type theory." This emphasis in turn leads to categorical logic because a logic over a type theory categorically corresponds to one ("total") category, capturing the logic, being fibred over another ("base") category, capturing the type theory. Other generalizations Partial algebras Both universal algebra and model theory study classes of (structures or) algebras that are defined by a signature and a set of axioms. In the case of model theory these axioms have the form of first-order sentences. The formalism of universal algebra is much more restrictive; essentially it only allows first-order sentences that have the form of universally quantified equations between terms, e.g. ∀x ∀y (x + y = y + x). One consequence is that the choice of a signature is more significant in universal algebra than it is in model theory. For example, the class of groups, in the signature consisting of the binary function symbol × and the constant symbol 1, is an elementary class, but it is not a variety. Universal algebra solves this problem by adding a unary function symbol ⁻¹. In the case of fields this strategy works only for addition. For multiplication it fails because 0 does not have a multiplicative inverse. An ad hoc attempt to deal with this would be to define 0⁻¹ = 0. (This attempt fails, essentially because with this definition 0 × 0⁻¹ = 1 is not true.) Therefore, one is naturally led to allow partial functions, i.e., functions that are defined only on a subset of their domain. However, there are several obvious ways to generalize notions such as substructure, homomorphism and identity. Structures for typed languages In type theory, there are many sorts of variables, each of which has a type. Types are inductively defined; given two types δ and σ there is also a type σ → δ that represents functions from objects of type σ to objects of type δ. A structure for a typed language (in the ordinary first-order semantics) must include a separate set of objects of each type, and for a function type the structure must have complete information about the function represented by each object of that type. Higher-order languages There is more than one possible semantics for higher-order logic, as discussed in the article on second-order logic.
When using full higher-order semantics, a structure need only have a universe for objects of type 0, and the T-schema is extended so that a quantifier over a higher-order type is satisfied by the model if and only if it is disquotationally true. When using first-order semantics, an additional sort is added for each higher-order type, as in the case of a many-sorted first-order language. Structures that are proper classes In the study of set theory and category theory, it is sometimes useful to consider structures in which the domain of discourse is a proper class instead of a set. These structures are sometimes called class models to distinguish them from the "set models" discussed above. When the domain is a proper class, each function and relation symbol may also be represented by a proper class. In Bertrand Russell's Principia Mathematica, structures were also allowed to have a proper class as their domain.
Mathematics
Model theory
null
4056309
https://en.wikipedia.org/wiki/Banded%20bullfrog
Banded bullfrog
The banded bullfrog (Kaloula pulchra) is a species of frog in the narrow-mouthed frog family Microhylidae. Native to Southeast Asia, it is also known as the Asian painted frog, digging frog, Malaysian bullfrog, common Asian frog, and painted balloon frog. In the pet trade, it is sometimes called the chubby frog. Adults measure and have a dark brown back with stripes that vary from copper-brown to salmon pink. The banded bullfrog lives at low altitudes and is found in both urban and rural settings, as well as in forest habitats. The frogs bury themselves underground during dry periods and emerge after heavy rainfall to emit calls and breed. They feed primarily on ants and termites; predators of adults and tadpoles include snakes, dragonfly larvae, and snails. When threatened, they inflate their lungs and secrete a noxious white substance. The species is prevalent in the pet trade and is a potential invasive species, having been introduced to Taiwan, the Philippines, Guam, Singapore, Borneo, and Sulawesi. Taxonomy and etymology The banded bullfrog was first described in 1831 by the British zoologist John Edward Gray, as Kaloula pulchra (pulchra meaning "beautiful" in Latin). Cantor (1847) described the species under the name Hylaedactylus bivittatus, which was synonymized with K. pulchra by Günther (1858). The subspecies K. p. hainana was described by Gressitt (1938) as having a shorter snout and hind legs compared to the nominate subspecies, K. p. pulchra. A former subspecies in Sri Lanka, originally named K. p. taprobanica by Parker (1934), has since been reclassified as a separate species, Uperodon taprobanicus. Bourret (1942) described a subspecies K. p. macrocephala that is now considered by several authors to be a distinct species, K. macrocephala. According to Darrel Frost's Amphibian Species of the World, common names for Kaloula pulchra include the Malaysian narrowmouth toad, Asian painted frog, digging frog, painted bullfrog, Malaysian bullfrog, painted burrowing frog, common Asian bullfrog, painted balloon frog, and painted microhylid frog. It is also known as the chubby frog in the pet trade. Description The banded bullfrog is medium-sized with a stocky, triangular body and a short snout. Males grow to a snout–vent length (SVL) of and females are slightly larger, reaching an SVL of . Other than the slight difference in length, there is very limited sexual dimorphism. They have a body weight of . The back is dark brown with stripes that vary from copper-brown to salmon pink, and the abdomen is cream-colored. Tadpoles are about long after hatching and reach an SVL of about at the end of metamorphosis. They have an oval body that is brown or black with a pale belly, a round snout, and a moderately long, tapered tail with yellow speckles and tall fins. The eyes are relatively small, positioned on the sides of the head, with black or dark gray irises and a golden ring around the pupil. The tadpoles do not possess any tail filament. During metamorphosis, their eyes increase in size and bulge, and they develop slender limbs and digits with rounded tips. The tadpoles begin to metamorphose at about two weeks. Distribution and habitat The species is native to Southeast Asia. It is common over a range extending from northeastern India and Nepal to southern India and Sri Lanka, to southern China (especially Hainan) and Myanmar, and south to the islands of maritime Southeast Asia. Its wide distribution, compared to that of the related species Kaloula assamensis, has been attributed to its burrowing ability.
The banded bullfrog has been found at elevations between sea level and above sea level. It can occur in both urban and rural settings, and in forest habitats. As an invasive species The banded bullfrog is a potential invasive species. It has been introduced through both the pet trade and maritime transport, and has become established in Taiwan, the Philippines, Guam, Singapore, Borneo, and Sulawesi. Some specimens have been observed in Australia and New Zealand. Its introduction into the Philippines was likely accidental, via contamination of plant nursery materials or stowaways on ships and boats. Several specimens, likely introduced through the pet trade, were observed in Florida in 2006 and 2008; however, as of 2011, the population was under control and there was no evidence of reproduction. The frog was observed at an airport in Perth, Australia, and at a cargo port in New Zealand, but no established invasive population has been found in either country as of 2019. Behaviour and ecology Breeding is stimulated by heavy monsoon rains, after which the frogs relocate from underground to rain pools or ponds. They are more commonly found on wetter nights, and while they are not reproductively active during dry periods, their gonads remain ripe so that they can mate soon after rainfall. In India, the male frogs call after the monsoon season begins in April or May. The calls recorded in India had 28–56 pulses per second and a frequency range of 50–1760 Hz. In Thailand, the dominant frequency was 250 Hz, with calls 560–600 ms long and 18–21 pulses per call. Their form is suited for walking and burrowing rather than jumping. They are able to survive dry conditions by burying themselves in the ground and waiting for rain; the burrowing also helps them avoid predators. When burrowing, they dig their way down hindlimbs first and use their forelimbs to push themselves several inches under the soil, where they can remain for the duration of the dry season. Banded bullfrogs hide under leaf litter during the daylight hours and eat in the evening. They have been found in trees and have been observed hunting termites in them. Diet, predators, and parasites In the wild, the banded bullfrog primarily eats ants and termites. It also feeds on other small invertebrates including flies, crickets, moths, grasshoppers, and earthworms. Its relatively small head and mouth mostly limit its diet to small and slow-moving prey. The feeding cycle from opening of the mouth to closing takes about 150 milliseconds and is relatively symmetrical, meaning that the bullfrog spends an equal amount of time extending its tongue and bringing the prey into the mouth. Banded bullfrogs kept as pets can be fed insects such as crickets, mealworms, insect larvae, and beetles. Snakes such as the kukri snake are predators of adult banded bullfrogs. For eggs and tadpoles, predators include dragonfly larvae and snails such as the golden apple snail. Banded bullfrogs display deimatic behaviour when threatened, greatly inflating their bodies in an attempt to distract or startle predators. By inflating its body and bending its head down, the bullfrog can appear larger than its actual size. It also secretes a noxious white substance through its skin that is distasteful, though non-toxic, to predators. The secretion contains a trypsin inhibitor and can induce hemolysis (rupturing of red blood cells). Parasites include parasitic worms that have been found in the frog's intestinal mesentery and leeches that attach to the frog's back.
Pet trade Commonly sold in pet stores, banded bullfrogs thrive in terrariums with substrate choices consisting of peat–soil mixes or moss mixtures. In contrast to the ant and termite diets of wild bullfrogs, captive bullfrogs typically feed on slightly larger insects such as crickets or mealworms. A survey of internet pet trade listings between 2015 and 2018 in Europe and the United States found that there were three to four times as many offers as requests for the banded bullfrog, with no evidence of captive breeding. In the Philippines, traders collect the frogs locally. Low interest in the Philippine pet trade has been attributed to the bullfrog's muted colours and burrowing behavior. Máximo and colleagues hypothesize that the species has been illegally sold in South America for decades, based on identifications in Argentina during the 1980s and in Brazil in 2020. Conservation status The International Union for Conservation of Nature listed the species as least concern due to its extensive distribution, tolerance of a wide range of environments, and predicted large population. In many regions, the banded bullfrog is captured for consumption, but this does not appear to have a substantial impact on its population.
Biology and health sciences
Frogs and toads
Animals
4058774
https://en.wikipedia.org/wiki/Human%20reproduction
Human reproduction
Human sexual reproduction begins with fertilization and results in the production of offspring. Successful reproduction typically involves sexual intercourse between a healthy, sexually mature and fertile male and female. During sexual intercourse, sperm cells are ejaculated into the vagina through the penis, resulting in fertilization of an ovum to form a zygote. While normal cells contain 46 chromosomes (23 pairs), gamete cells contain only half that number, and it is when these two cells merge into one combined zygote cell that genetic recombination occurs. The zygote then undergoes a defined development process known as human embryogenesis, which starts the typical 38-week gestation period for the embryo (and eventually foetus), followed by childbirth. Assisted reproductive technologies, such as IVF, also exist; some of these involve alternative methods of fertilization that do not require sexual intercourse, in which case fertilization of the ovum may be achieved by artificial insemination. Biological and legal requirements In order for human reproduction to be achieved, an individual must first have undergone puberty, requiring that ovulation in females and spermarche in males have occurred prior to engaging in sexual intercourse or achieving pregnancy through non-penetrative means. Before puberty, humans are infertile, as their genitals lack reproductive function (only being able to discharge urine). Legal factors also play a vital role in the achievement of human reproduction: a minor under the age of consent cannot give legal consent to sexual intercourse or artificial alternatives to reproduction, and in the former case the older party is liable to be charged with statutory rape, depending on the jurisdiction. Even for minors above the age of consent, comprehensive sex education advises both consenting parties to use contraception to avoid both sexually transmitted infections and early, unplanned or unwanted pregnancies. Pregnancy in girls under the age of 15 is especially discouraged because their reproductive systems have yet to reach full maturity. Anatomy Male reproductive system The male reproductive system contains two main divisions: the testicles, where sperm are produced, and the penis, through which semen is ejaculated via the urethra. In humans, both of these organs are outside the abdominal cavity. Having the testicles outside the abdomen facilitates temperature regulation of the sperm, which require temperatures about 2–3 °C below the normal body temperature of 37 °C to survive. In particular, the extraperitoneal location of the testicles may result in a 2-fold reduction in the heat-induced contribution to the spontaneous mutation rate in male germinal tissues compared to tissues at 37 °C. If the testicles remain too close to the body, it is likely that the increase in temperature will harm spermatozoa formation, making conception more difficult. This is why the testes are carried in an external scrotum rather than within the abdomen; they normally remain slightly cooler than body temperature, facilitating sperm production. Male germ cells produced in the testes are able to perform special DNA repair processes during meiosis that act to repair DNA damages and to maintain the integrity of the genomes that are to be passed on to progeny. Two of these DNA repair processes are homologous recombinational repair and non-homologous end joining.
Female reproductive system The female reproductive system likewise contains two main divisions: the external genitalia (the vulva) and the internal genitalia. The ovum meets with the sperm cell: a sperm may penetrate and merge with the egg, fertilizing it with the help of certain hydrolytic enzymes present in the acrosome. Fertilization usually occurs in the fallopian tubes, but can happen in the uterus itself. The zygote then becomes implanted in the lining of the uterus, where it begins the processes of embryogenesis and morphogenesis. When the fetus is developed enough to survive outside the uterus, the cervix dilates and contractions of the uterus propel it through the birth canal (the vagina), delivering the newborn infant. This process is called childbirth. The ova, which are the female sex cells, are much larger than the spermatozoon and are normally formed within the ovaries of the female fetus before birth. They are mostly fixed in location within the ovary until their transit to the uterus, and contain nutrients for the later zygote and embryo. At regular intervals, as part of the menstrual cycle and in response to hormonal signals, the process of oogenesis matures one ovum, which is released and sent down the fallopian tube. If not fertilized, this egg is flushed out of the system through menstruation. Oocytes (female germ cells) located in the primordial follicles of the ovary are in a non-growing, prophase-arrested state, but are able to undergo highly efficient homologous recombinational repair of DNA damages, including double-strand breaks. This capability allows the maintenance of genome integrity and protection of the health of offspring. Process of fertilization Human reproduction normally begins with copulation, though it may be achieved through artificial insemination, and is followed by nine months of pregnancy before childbirth. Pregnancy can be avoided with the use of contraceptives such as condoms and intrauterine devices. Copulation Human reproduction naturally takes place as internal fertilization by sexual intercourse. During this process, the man inserts his erect penis into the woman's vagina, and either partner then initiates rhythmic pelvic thrusts until the man achieves orgasm, which leads to ejaculation of semen containing sperm into the vaginal canal. The sperm and the ovum are known as the gametes (each containing half the genetic information of the parent, created through meiosis). A sperm cell (one of approximately 250 million sperm in a typical ejaculation) travels through the vagina and cervix into the uterus or fallopian tubes. Only 1 in 14 million of the ejaculated sperm will reach the fallopian tube. The egg simultaneously moves through the fallopian tube away from the ovary. One of the sperm encounters, penetrates and fertilizes the ovum, creating a zygote. Upon fertilization and implantation, gestation of the fetus then occurs within the uterus. Pregnancy rates for sexual intercourse are highest when intercourse takes place within the part of the menstrual cycle from some five days before until one to two days after ovulation. For an optimal chance of pregnancy, there are recommendations of sexual intercourse every one or two days, or every two or three days. Studies have shown no significant difference between different sex positions and pregnancy rate, as long as intercourse results in ejaculation into the vagina.
Alternative methods As an alternative to natural sexual intercourse, there exists artificial insemination, where sperm is introduced into the female reproductive system without the insertion of the penis. There are also many methods of assisted reproductive technology, such as in vitro fertilization, where one or more egg cells are retrieved from a woman's ovaries and co-incubated with sperm outside the body. The resulting embryo can then be reinserted into the woman's womb. Pregnancy Pregnancy is the period of time during which the fetus develops, dividing via mitosis inside the uterus. During this time, the fetus receives all of its nutrition and oxygenated blood from the mother, filtered through the placenta, which is attached to the fetus' abdomen via an umbilical cord. This drain of nutrients can be quite taxing on the mother, who is required to ingest slightly higher levels of calories. In addition, certain vitamins and other nutrients are required in greater quantities than normal, often creating abnormal eating habits. The gestation period is about 266 days in humans. While in the uterus, the baby first passes through a very brief zygote stage, then the embryonic stage, which is marked by the development of major organs and lasts for approximately eight weeks, then the fetal stage, which revolves around the development of bone cells while the fetus continues to grow in size. It is estimated that about 3–5% of couples are infertile, and that the fecundity of couples is around 30% for each menstrual cycle. Labor and birth Labor is divided into four stages. The first stage comprises a latent phase and an active phase, separated by the dilation of the cervix to 6 cm, and ends at full dilation of 10 cm. The second stage is the pushing stage. The third stage involves the delivery of the placenta, and the last stage is the contraction of the uterus. Once the fetus is sufficiently developed, chemical signals begin the process of birth, which begins with the fetus being pushed out of the birthing canal. The newborn, which is called an infant in humans, should typically begin respiration on its own shortly after birth. Not long after, the placenta is delivered. The person assisting the birth may also sever the umbilical cord. Discovery of mechanism While most ancient human societies believed that sexual intercourse was necessary for reproduction, the reasons why some intercourse did not result in children, and the mechanism by which mating produced children, were not understood. The theory of preformationism was popular in Ancient Greece and Christendom for centuries. Because spermatozoa are too small to see with the naked eye, it was only after his development of the microscope that Antonie van Leeuwenhoek discovered them in 1677. Mitosis and meiosis were not discovered until the late 1800s.
Biology and health sciences
Human reproduction
Biology
4059023
https://en.wikipedia.org/wiki/Search%20engine
Search engine
A search engine is a software system that provides hyperlinks to web pages and other relevant information on the Web in response to a user's query. The user inputs a query within a web browser or a mobile app, and the search results are often a list of hyperlinks, accompanied by textual summaries and images. Users also have the option of limiting the search to a specific type of results, such as images, videos, or news. For a search provider, its engine is part of a distributed computing system that can encompass many data centers throughout the world. The speed and accuracy of an engine's response to a query is based on a complex system of indexing that is continuously updated by automated web crawlers. This can include data mining the files and databases stored on web servers, but some content is not accessible to crawlers. There have been many search engines since the dawn of the Web in the 1990s, but Google Search became the dominant one in the 2000s and has remained so. It currently has a 90% global market share. Other search engines with a smaller market share include Bing at 4%, Yandex at 2%, and Yahoo at 1%; all remaining search engines combined have less than a 3% market share. The business of websites improving their visibility in search results, known as search engine marketing and optimization, has thus largely focused on Google. History Pre-1990s In 1945, Vannevar Bush described an information retrieval system that would allow a user to access a great expanse of information, all at a single desk. He called it a memex. He described the system in an article titled "As We May Think" that was published in The Atlantic Monthly. The memex was intended to give a user the capability to overcome the ever-increasing difficulty of locating information in ever-growing centralized indices of scientific work. Vannevar Bush envisioned libraries of research with connected annotations, which are similar to modern hyperlinks. Link analysis eventually became a crucial component of search engines through algorithms such as Hyper Search and PageRank. 1990s: Birth of search engines The first internet search engines predate the debut of the Web in December 1990: WHOIS user search dates back to 1982, and the Knowbot Information Service multi-network user search was first implemented in 1989. The first well-documented search engine that searched content files, namely FTP files, was Archie, which debuted on 10 September 1990. Prior to September 1993, the World Wide Web was entirely indexed by hand. There was a list of webservers edited by Tim Berners-Lee and hosted on the CERN webserver. One snapshot of the list from 1992 remains, but as more and more web servers went online, the central list could no longer keep up. On the NCSA site, new servers were announced under the title "What's New!". The first tool used for searching content (as opposed to users) on the Internet was Archie. The name stands for "archive" without the "v". It was created by Alan Emtage, a computer science student at McGill University in Montreal, Quebec, Canada. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchable database of file names; however, Archie did not index the contents of these sites, since the amount of data was so limited it could be readily searched manually. The rise of Gopher (created in 1991 by Mark McCahill at the University of Minnesota) led to two new search programs, Veronica and Jughead.
Like Archie, they searched the file names and titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from specific Gopher servers. While the name "Archie" was not a reference to the Archie comic book series, "Veronica" and "Jughead" are characters in the series, thus referencing their predecessor. In the summer of 1993, no search engine existed for the web, though numerous specialized catalogs were maintained by hand. Oscar Nierstrasz at the University of Geneva wrote a series of Perl scripts that periodically mirrored these pages and rewrote them into a standard format. This formed the basis for W3Catalog, the web's first primitive search engine, released on September 2, 1993. In June 1993, Matthew Gray, then at MIT, produced what was probably the first web robot, the Perl-based World Wide Web Wanderer, and used it to generate an index called "Wandex". The purpose of the Wanderer was to measure the size of the World Wide Web, which it did until late 1995. The web's second search engine, Aliweb, appeared in November 1993. Aliweb did not use a web robot, but instead depended on being notified by website administrators of the existence at each site of an index file in a particular format. JumpStation (created in December 1993 by Jonathon Fletcher) used a web robot to find web pages and to build its index, and used a web form as the interface to its query program. It was thus the first WWW resource-discovery tool to combine the three essential features of a web search engine (crawling, indexing, and searching) as described below. Because of the limited resources available on the platform it ran on, its indexing and hence its searching were limited to the titles and headings found in the web pages the crawler encountered. One of the first "all text" crawler-based search engines was WebCrawler, which came out in 1994. Unlike its predecessors, it allowed users to search for any word in any web page, which has been the standard for all major search engines since. It was also the first search engine to be widely known by the public. Also in 1994, Lycos (which started at Carnegie Mellon University) was launched and became a major commercial endeavor. The first popular search engine on the Web was Yahoo! Search. The first product from Yahoo!, founded by Jerry Yang and David Filo in January 1994, was a Web directory called Yahoo! Directory. In 1995, a search function was added, allowing users to search Yahoo! Directory. It became one of the most popular ways for people to find web pages of interest, but its search function operated on its web directory rather than on full-text copies of web pages. Soon after, a number of search engines appeared and vied for popularity. These included Magellan, Excite, Infoseek, Inktomi, Northern Light, and AltaVista. Information seekers could also browse a directory instead of doing a keyword-based search. In 1996, Robin Li developed the RankDex site-scoring algorithm for ranking search engine results pages and received a US patent for the technology. It was the first search engine that used hyperlinks to measure the quality of the websites it was indexing, predating the very similar algorithm patent filed by Google two years later, in 1998.
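The link-analysis idea common to RankDex and PageRank can be illustrated with a short power-iteration sketch. The damping factor and the three-page link graph below are illustrative assumptions, and a real implementation must also handle pages with no outgoing links and vastly larger graphs.

```python
# Stripped-down power-iteration sketch of link-based scoring: each page
# repeatedly distributes its current score across the pages it links to,
# damped so that every page keeps a small baseline score.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for p, outgoing in links.items():
            share = damping * rank[p] / len(outgoing)
            for q in outgoing:
                new[q] += share
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(links))  # "c", linked from both "a" and "b", ends up ranked highest
```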
Larry Page referenced Li's work in some of his U.S. patents for PageRank. Li later used his RankDex technology for the Baidu search engine, which he founded in China and launched in 2000. In 1996, Netscape was looking to give a single search engine an exclusive deal as the featured search engine on Netscape's web browser. There was so much interest that instead, Netscape struck deals with five of the major search engines: for $5 million a year, each search engine would be in rotation on the Netscape search engine page. The five engines were Yahoo!, Magellan, Lycos, Infoseek, and Excite. Google adopted the idea of selling search terms in 1998 from a small search engine company named goto.com. This move had a significant effect on the search engine business, which went from struggling to one of the most profitable businesses on the Internet. Search engines were also known as some of the brightest stars in the Internet investing frenzy that occurred in the late 1990s. Several companies entered the market spectacularly, receiving record gains during their initial public offerings. Some have taken down their public search engine and are marketing enterprise-only editions, such as Northern Light. Many search engine companies were caught up in the dot-com bubble, a speculation-driven market boom that peaked in March 2000. 2000s–present: Post dot-com bubble Around 2000, Google's search engine rose to prominence. The company achieved better results for many searches with an algorithm called PageRank, as was explained in the paper Anatomy of a Search Engine written by Sergey Brin and Larry Page, who later founded Google. This iterative algorithm ranks web pages based on the number and PageRank of other web sites and pages that link there, on the premise that good or desirable pages are linked to more than others. Larry Page's patent for PageRank cites Robin Li's earlier RankDex patent as an influence. Google also maintained a minimalist interface to its search engine. In contrast, many of its competitors embedded a search engine in a web portal. In fact, the Google search engine became so popular that spoof engines emerged, such as Mystery Seeker. By 2000, Yahoo! was providing search services based on Inktomi's search engine. Yahoo! acquired Inktomi in 2002, and Overture (which owned AlltheWeb and AltaVista) in 2003. Yahoo! relied on Google's search engine until 2004, when it launched its own search engine based on the combined technologies of its acquisitions. Microsoft first launched MSN Search in the fall of 1998, using search results from Inktomi. In early 1999, the site began to display listings from Looksmart, blended with results from Inktomi. For a short time in 1999, MSN Search used results from AltaVista instead. In 2004, Microsoft began a transition to its own search technology, powered by its own web crawler (called msnbot). Microsoft's rebranded search engine, Bing, was launched on June 1, 2009. On July 29, 2009, Yahoo! and Microsoft finalized a deal in which Yahoo! Search would be powered by Microsoft Bing technology. Active search engine crawlers include those of Google, Sogou, Baidu, Bing, Gigablast, Mojeek, DuckDuckGo and Yandex. Approach A search engine maintains the following processes in near real time: web crawling, indexing, and searching. Web search engines get their information by web crawling from site to site. The "spider" checks for the standard filename robots.txt, addressed to it.
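As an aside, this convention is common enough that Python's standard library ships a robots.txt parser. A minimal sketch, with a placeholder site URL and a hypothetical user-agent name:

```python
# urllib.robotparser is part of the Python standard library; the URLs and
# the crawler name here are placeholders for illustration.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetches and parses the file over the network

# A polite crawler asks before fetching each page:
if rp.can_fetch("MyCrawler", "https://example.com/private/page.html"):
    print("allowed to crawl")
else:
    print("disallowed by robots.txt")
```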
The robots.txt file contains directives for search spiders, telling them which pages to crawl and which pages not to crawl. After checking for robots.txt and either finding it or not, the spider sends certain information back to be indexed depending on many factors, such as the titles, page content, JavaScript, Cascading Style Sheets (CSS), headings, or its metadata in HTML meta tags. After a certain number of pages crawled, amount of data indexed, or time spent on the website, the spider stops crawling and moves on. "[N]o web crawler may actually crawl the entire reachable web. Due to infinite websites, spider traps, spam, and other exigencies of the real web, crawlers instead apply a crawl policy to determine when the crawling of a site should be deemed sufficient. Some websites are crawled exhaustively, while others are crawled only partially". Indexing means associating words and other definable tokens found on web pages to their domain names and HTML-based fields. The associations are made in a public database, made available for web search queries. A query from a user can be a single word, multiple words or a sentence. The index helps find information relating to the query as quickly as possible. Some of the techniques for indexing and caching are trade secrets, whereas web crawling is a straightforward process of visiting all sites on a systematic basis. Between visits by the spider, the cached version of the page (some or all the content needed to render it) stored in the search engine working memory is quickly sent to an inquirer. If a visit is overdue, the search engine can just act as a web proxy instead. In this case, the page may differ from the search terms indexed. The cached page holds the appearance of the version whose words were previously indexed, so a cached version of a page can be useful to the website when the actual page has been lost, but this problem is also considered a mild form of linkrot. Typically, when a user enters a query into a search engine, it is a few keywords. The index already has the names of the sites containing the keywords, and these are instantly obtained from the index. The real processing load is in generating the web pages that are the search results list: every page in the entire list must be weighted according to information in the indexes. Then the top search result item requires the lookup, reconstruction, and markup of the snippets showing the context of the keywords matched. These are only part of the processing each search results web page requires, and further pages (next to the top) require more of this post-processing. Beyond simple keyword lookups, search engines offer their own GUI- or command-driven operators and search parameters to refine the search results. These provide the necessary controls for the user to engage in a feedback loop, filtering and weighting to refine the search results in light of the initial pages returned by the first search. For example, since 2007 the Google.com search engine has allowed one to filter by date by clicking "Show search tools" in the leftmost column of the initial search results page, and then selecting the desired date range. It is also possible to weight by date because each page has a modification time. Most search engines support the use of the Boolean operators AND, OR and NOT to help end users refine the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search.
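A minimal sketch of the two ideas just described (an inverted index mapping each token to the documents containing it, and Boolean AND/OR/NOT queries evaluated as set operations over that index) is shown below. It is illustrative Python only, with made-up documents, and is not a description of any production engine.

    from collections import defaultdict

    # Made-up corpus for illustration only.
    documents = {
        1: "haddock is a saltwater fish",
        2: "the haddock fishery is important in northern europe",
        3: "search engines build an inverted index of web pages",
    }

    # Build the inverted index: token -> set of document ids containing it.
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for token in text.lower().split():
            index[token].add(doc_id)

    all_docs = set(documents)

    # Boolean operators map naturally onto set operations over posting lists.
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def NOT(a):    return all_docs - a

    # Example query: haddock AND NOT fishery
    result = AND(index["haddock"], NOT(index["fishery"]))
    print(sorted(result))  # -> [1]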
The engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching, where the search involves using statistical analysis on pages containing the words or phrases searched for. The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve. There are two main types of search engine that have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively. The other is a system that generates an "inverted index" by analyzing texts it locates. This second form relies much more heavily on the computer itself to do the bulk of the work. Most Web search engines are commercial ventures supported by advertising revenue and thus some of them allow advertisers to have their listings ranked higher in search results for a fee. Search engines that do not accept money for their search results make money by running search-related ads alongside the regular search engine results. The search engines make money every time someone clicks on one of these ads. Local search Local search is the process of optimizing the online presence of local businesses. It focuses on keeping business information accurate and consistent so that local searches return reliable results. It is important because many people determine where they plan to go and what to buy based on their searches. Market share Google is by far the world's most used search engine, with a market share of 90%; the world's other most used search engines are Bing at 4%, Yandex at 2%, and Yahoo! at 1%. Other search engines not listed have less than a 3% market share each. In 2024, Google's dominance was ruled an illegal monopoly in a case brought by the US Department of Justice. Russia and East Asia In Russia, Yandex has a market share of 62.6%, compared to Google's 28.3%. Yandex is the second most used search engine on smartphones in Asia and Europe. In China, Baidu is the most popular search engine. South Korea-based search portal Naver is used for 62.8% of online searches in the country. Yahoo! Japan and Yahoo! Taiwan are the most popular choices for Internet searches in Japan and Taiwan, respectively. China is one of the few countries where Google is not in the top three web search engines for market share. Google was previously more popular in China, but withdrew significantly after a disagreement with the government over censorship and a cyberattack. Bing, however, is in the top three web search engines with a market share of 14.95%. Baidu is top with 49.1% of the market share. Europe Most countries' markets in the European Union are dominated by Google, except for the Czech Republic, where Seznam is a strong competitor. The search engine Qwant is based in Paris, France, from where it attracts most of its 50 million monthly registered users.
Search engine bias Although search engines are programmed to rank websites based on some combination of their popularity and relevancy, empirical studies indicate various political, economic, and social biases in the information they provide and the underlying assumptions about the technology. These biases can be a direct result of economic and commercial processes (e.g., companies that advertise with a search engine can also become more popular in its organic search results) and political processes (e.g., the removal of search results to comply with local laws). For example, Google will not surface certain neo-Nazi websites in France and Germany, where Holocaust denial is illegal. Biases can also be a result of social processes, as search engine algorithms are frequently designed to exclude non-normative viewpoints in favor of more "popular" results. Indexing algorithms of major search engines skew towards coverage of U.S.-based sites, rather than websites from non-U.S. countries. Google Bombing is one example of an attempt to manipulate search results for political, social or commercial reasons. Several scholars have studied the cultural changes triggered by search engines, and the representation of certain controversial topics in their results, such as terrorism in Ireland, climate change denial, and conspiracy theories. Customized results and filter bubbles Concern has been raised that search engines such as Google and Bing provide customized results based on the user's activity history, leading to what Eli Pariser in 2011 termed echo chambers or filter bubbles. The argument is that search engines and social media platforms use algorithms to selectively guess what information a user would like to see, based on information about the user (such as location, past click behaviour and search history). As a result, websites tend to show only information that agrees with the user's past viewpoint. According to Eli Pariser, users get less exposure to conflicting viewpoints and are isolated intellectually in their own informational bubble. Since this problem has been identified, competing search engines have emerged that seek to avoid it by not tracking or "bubbling" users, such as DuckDuckGo. However, many scholars have questioned Pariser's view, finding that there is little evidence for the filter bubble. On the contrary, a number of studies trying to verify the existence of filter bubbles have found only minor levels of personalisation in search, that most people encounter a range of views when browsing online, and that Google News tends to promote mainstream established news outlets. Religious search engines The global growth of the Internet and electronic media in the Arab and Muslim world during the last decade has encouraged Islamic adherents in the Middle East and the Asian sub-continent to attempt their own search engines, their own filtered search portals that would enable users to perform safe searches. More than offering the usual safe search filters, these Islamic web portals categorize websites as being either "halal" or "haram", based on interpretation of Sharia law. ImHalal came online in September 2011. Halalgoogling came online in July 2013. These use haram filters on the collections from Google and Bing (and others).
While a lack of investment and the slow pace of technological development in the Muslim world have hindered progress and thwarted the success of an Islamic search engine targeting Islamic adherents as its main consumers, projects like Muxlim (a Muslim lifestyle site) did receive millions of dollars from investors such as Rite Internet Ventures, and it also faltered. Other religion-oriented search engines are Jewogle, the Jewish version of Google, and the Christian search engine SeekFind.org. SeekFind filters sites that attack or degrade their faith. Search engine submission Web search engine submission is a process in which a webmaster submits a website directly to a search engine. While search engine submission is sometimes presented as a way to promote a website, it generally is not necessary because the major search engines use web crawlers that will eventually find most web sites on the Internet without assistance. Webmasters can either submit one web page at a time, or they can submit the entire site using a sitemap, but it is normally only necessary to submit the home page of a web site, as search engines are able to crawl a well-designed website. There are two remaining reasons to submit a web site or web page to a search engine: to add an entirely new web site without waiting for a search engine to discover it, and to have a web site's record updated after a substantial redesign. Some search engine submission software not only submits websites to multiple search engines, but also adds links to websites from their own pages. This could appear helpful in increasing a website's ranking, because external links are one of the most important factors determining a website's ranking. However, John Mueller of Google has stated that this "can lead to a tremendous number of unnatural links for your site" with a negative impact on site ranking. Technology Archie The first Internet search engine was Archie, created in 1990 by Alan Emtage, a student at McGill University in Montreal. The author originally wanted to call the program "archives", but had to shorten it to comply with the Unix world standard of assigning programs and files short, cryptic names such as grep, cat, troff, sed, awk, perl, and so on. The primary method of storing and retrieving files was via the File Transfer Protocol (FTP). This was (and still is) a system that specified a common way for computers to exchange files over the Internet. It works like this: some administrator decides that he wants to make files available from his computer. He sets up a program on his computer, called an FTP server. When someone on the Internet wants to retrieve a file from this computer, he or she connects to it via another program called an FTP client. Any FTP client program can connect with any FTP server program as long as the client and server programs both fully follow the specifications set forth in the FTP protocol. Initially, anyone who wanted to share a file had to set up an FTP server in order to make the file available to others. Later, "anonymous" FTP sites became repositories for files, allowing all users to post and retrieve them. Even with archive sites, many important files were still scattered on small FTP servers. These files could be located only by the Internet equivalent of word of mouth: somebody would post an e-mail to a message list or a discussion forum announcing the availability of a file. Archie changed all that.
It combined a script-based data gatherer, which fetched site listings of anonymous FTP files, with a regular expression matcher for retrieving file names matching a user query. In other words, Archie's gatherer scoured FTP sites across the Internet and indexed all of the files it found. Its regular expression matcher provided users with access to its database. Veronica In 1993, the University of Nevada System Computing Services group developed Veronica. It was created as a type of searching device similar to Archie but for Gopher files. Another Gopher search service, called Jughead, appeared a little later, probably for the sole purpose of rounding out the comic-strip triumvirate. Jughead is an acronym for Jonzy's Universal Gopher Hierarchy Excavation and Display, although, like Veronica, it is probably safe to assume that the creator backed into the acronym. Jughead's functionality was pretty much identical to Veronica's, although it appears to be a little rougher around the edges. The Lone Wanderer The World Wide Web Wanderer, developed by Matthew Gray in 1993, was the first robot on the Web and was designed to track the Web's growth. Initially, the Wanderer counted only Web servers, but shortly after its introduction, it started to capture URLs as it went along. The database of captured URLs became the Wandex, the first web database. Matthew Gray's Wanderer created quite a controversy at the time, partially because early versions of the software ran rampant through the Net and caused a noticeable netwide performance degradation. This degradation occurred because the Wanderer would access the same page hundreds of times a day. The Wanderer soon amended its ways, but the controversy over whether robots were good or bad for the Internet remained. In response to the Wanderer, Martijn Koster created Archie-Like Indexing of the Web, or ALIWEB, in October 1993. As the name implies, ALIWEB was the HTTP equivalent of Archie, and because of this, it is still unique in many ways. ALIWEB does not have a web-searching robot. Instead, webmasters of participating sites post their own index information for each page they want listed. The advantage to this method is that users get to describe their own site, and a robot does not run about eating up Net bandwidth. The disadvantages of ALIWEB are more of a problem today. The primary disadvantage is that a special indexing file must be submitted. Most users do not understand how to create such a file, and therefore they do not submit their pages. This leads to a relatively small database, which means that users are less likely to search ALIWEB than one of the large bot-based sites. This Catch-22 has been somewhat offset by incorporating other databases into the ALIWEB search, but it still does not have the mass appeal of search engines such as Yahoo! or Lycos. Excite Excite, initially called Architext, was started by six Stanford undergraduates in February 1993. Their idea was to use statistical analysis of word relationships in order to provide more efficient searches through the large amount of information on the Internet. Their project was fully funded by mid-1993. Once funding was secured, they released a version of their search software for webmasters to use on their own web sites. At the time, the software was called Architext, but it now goes by the name of Excite for Web Servers. Excite, launched in 1995, was the first serious commercial search engine. It was developed at Stanford and was purchased for $6.5 billion by @Home.
In 2001, Excite and @Home went bankrupt and InfoSpace bought Excite for $10 million. Some of the first analysis of web searching was conducted on search logs from Excite. Yahoo! In April 1994, two Stanford University Ph.D. candidates, David Filo and Jerry Yang, created some pages that became rather popular. They called the collection of pages Yahoo! Their official explanation for the name choice was that they considered themselves to be a pair of yahoos. As the number of links grew and their pages began to receive thousands of hits a day, the team created ways to better organize the data. In order to aid in data retrieval, Yahoo! (www.yahoo.com) became a searchable directory. The search feature was a simple database search engine. Because Yahoo! entries were entered and categorized manually, Yahoo! was not really classified as a search engine. Instead, it was generally considered to be a searchable directory. Yahoo! has since automated some aspects of the gathering and classification process, blurring the distinction between engine and directory. The Wanderer captured only URLs, which made it difficult to find things that were not explicitly described by their URL. Because URLs are rather cryptic to begin with, this did not help the average user. Searching Yahoo! or the Galaxy was much more effective because they contained additional descriptive information about the indexed sites. Lycos At Carnegie Mellon University during July 1994, Michael Mauldin, on leave from the university, developed the Lycos search engine. Types of web search engines Search engines on the web are sites enriched with the facility to search the content stored on other sites. There are differences in the way various search engines work, but they all perform three basic tasks: finding and selecting full or partial content based on the keywords provided; maintaining an index of the content and referencing the locations they find; and allowing users to look for words or combinations of words found in that index. The process begins when a user enters a query statement into the system through the interface provided. There are basically three types of search engines: those that are powered by robots (called crawlers, ants or spiders), those that are powered by human submissions, and those that are a hybrid of the two. Crawler-based search engines are those that use automated software agents (called crawlers) that visit a Web site, read the information on the actual site, read the site's meta tags and also follow the links that the site connects to, performing indexing on all linked Web sites as well. The crawler returns all that information back to a central repository, where the data is indexed. The crawler will periodically return to the sites to check for any information that has changed. The frequency with which this happens is determined by the administrators of the search engine. Human-powered search engines rely on humans to submit information that is subsequently indexed and catalogued. Only information that is submitted is put into the index. In both cases, when you query a search engine to locate information, you are actually searching through the index that the search engine has created; you are not actually searching the Web. These indices are giant databases of information that is collected and stored and subsequently searched. This explains why sometimes a search on a commercial search engine, such as Yahoo! or Google, will return results that are, in fact, dead links.
Since the search results are based on the index, if the index has not been updated since a Web page became invalid, the search engine treats the page as still an active link even though it no longer is. It will remain that way until the index is updated. So why will the same search on different search engines produce different results? Part of the answer is that not all indices are going to be exactly the same. It depends on what the spiders find or what the humans submitted. But more importantly, not every search engine uses the same algorithm to search through the indices. The algorithm is what the search engines use to determine the relevance of the information in the index to what the user is searching for. One of the elements that a search engine algorithm scans for is the frequency and location of keywords on a Web page. Those with higher frequency are typically considered more relevant. But search engine technology is becoming sophisticated in its attempt to discourage what is known as keyword stuffing, or spamdexing. Another common element that algorithms analyze is the way that pages link to other pages in the Web. By analyzing how pages link to each other, an engine can both determine what a page is about (if the keywords of the linked pages are similar to the keywords on the original page) and whether that page is considered "important" and deserving of a boost in ranking. Just as the technology is becoming increasingly sophisticated to ignore keyword stuffing, it is also becoming more savvy to webmasters who build artificial links into their sites in order to build an artificial ranking. Modern web search engines are highly intricate software systems that employ technology that has evolved over the years. There are a number of sub-categories of search engine software that are separately applicable to specific 'browsing' needs. These include web search engines (e.g. Google), database or structured data search engines (e.g. Dieselpoint), and mixed search engines or enterprise search. The more prevalent search engines, such as Google and Yahoo!, utilize hundreds of thousands of computers to process trillions of web pages in order to return fairly well-aimed results. Due to this high volume of queries and text processing, the software is required to run in a highly distributed environment with a high degree of redundancy. Another category of search engines is scientific search engines. These are search engines which search scientific literature. The best-known example is Google Scholar. Researchers are working on improving search engine technology by making search engines understand the content of the articles, such as extracting theoretical constructs or key research findings.
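The frequency-and-location idea described earlier in this section can be sketched very simply: count how often the query terms occur in a page and give extra weight when they appear in the title. The weights, pages and field names below are invented purely for illustration and do not reflect any real engine's scoring.

    # Illustrative scoring only; the title boost of 3.0 is an arbitrary assumed weight.
    def score(page, query_terms, title_weight=3.0):
        title_tokens = page["title"].lower().split()
        body_tokens = page["body"].lower().split()
        total = 0.0
        for term in query_terms:
            total += title_weight * title_tokens.count(term)  # location: title matches count more
            total += body_tokens.count(term)                  # frequency in the body
        return total

    pages = [
        {"title": "Haddock fishery", "body": "the haddock is fished in the north atlantic"},
        {"title": "Cod recipes", "body": "cod and haddock are both members of the cod family"},
    ]
    query = ["haddock"]
    ranked = sorted(pages, key=lambda p: score(p, query), reverse=True)
    print([p["title"] for p in ranked])  # the page with the term in its title ranks first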
Technology
Internet
null
14341914
https://en.wikipedia.org/wiki/Jaekelopterus
Jaekelopterus
Jaekelopterus is a genus of predatory eurypterid, a group of extinct aquatic arthropods. Fossils of Jaekelopterus have been discovered in deposits of Early Devonian age, from the Pragian and Emsian stages. There are two known species: the type species J. rhenaniae from brackish to fresh water strata in the Rhineland, and J. howelli from estuarine strata in Wyoming. The generic name combines the name of German paleontologist Otto Jaekel, who described the type species, and the Greek word πτερόν (pteron) meaning "wing". Based on the isolated fossil remains of a large chelicera (claw) from the Klerf Formation of Germany, J. rhenaniae has been estimated to have reached a size of around 2.3–2.6 metres (7.5–8.5 ft), making it the largest arthropod ever discovered, surpassing other large arthropods such as fellow eurypterids Acutiramus and Pterygotus; the millipede Arthropleura. J. howelli was much smaller, reaching 80 centimetres (2.6 ft) in length. In overall appearance, Jaekelopterus is similar to other pterygotid eurypterids, possessing a large, expanded telson (the hindmost segment of the body) and enlarged pincers and forelimbs. Both species of Jaekelopterus were first described as species of the closely related Pterygotus but were raised as a separate genus based on an observed difference in the genital appendage. Though this feature has since proved to be a misidentification, other features distinguishing the genus from its relatives have been identified, including a telson with a triangular shape and a different inclination of the denticles of the claws. The chelicerae and compound eyes of Jaekelopterus indicate it was active and powerful with high visual acuity, most likely an apex predator in the ecosystems of Early Devonian Euramerica. Although eurypterids such as Jaekelopterus are often called "sea scorpions", the strata in which Jaekelopterus fossils have been found suggest that it lived in fresh water environments. Description Jaekelopterus is the largest known eurypterid and the largest known arthropod to have ever existed. This was determined based on a chelicera (claw) from the Emsian Klerf Formation of Willwerath, Germany, that measures long, but is missing a quarter of its length, suggesting that the full chelicera would have been long. If the ratio of body length to chelicera length matches those of other giant pterygotids, such as Acutiramus and Pterygotus, where the ratio between claw size and body length is relatively consistent, the organism that possessed the chelicera would have measured between in length. With the chelicerae extended, another metre would be added to this length. This estimate exceeds the maximum body size of all other known giant arthropods by almost half a metre even if the extended chelicerae are not included. Jaekelopterus is similar to other pterygotid eurypterids in its overall morphology, distinguished by its triangular telson (the hindmost segment of its body) and inclined principal denticles on its cheliceral rami (the moving part of the claws). The pterygotids, a group of highly derived ("advanced") eurypterids, differ from other groups in several features, especially in the chelicerae and the telson. The chelicerae of the Pterygotidae are enlarged and robust, clearly adapted for active prey capture, with chelae (pincers) more similar to the claws of some modern crustaceans, with well-developed teeth on the claws, relative to the chelicerae of other eurypterid groups. 
Another feature distinguishing the group from other eurypterid groups is their flattened and expanded telsons, likely used as rudders when swimming. J. howelli, known from over 30 specimens, has an almost identical pattern of denticulation on the chelicerae as J. rhenaniae and also preserves a flattened posterior margin of the telson, which results in a triangular shape, as in J. rhenaniae. Its serrated telson margin and the massive elongation of the second intermediate denticle clearly distinguishes it from J. rhenaniae. Furthermore, the type A genital appendage is not bifurcated at its end. J. howelli is much smaller than J. rhenaniae, reaching 80 centimetres (2.6 ft) in length. History of research Jaekelopterus was originally described as a species of Pterygotus, P. rhenaniae, in 1914 by German palaeontologist Otto Jaekel based on an isolated fossil pretelson (the segment directly preceding the telson) he received that had been discovered at Alken in Lower Devonian deposits of the Rhineland in Germany. Jaekel considered the pretelson to be characteristic of Pterygotus, other discovered elements differing little from previously known species of that genus, such as P. buffaloensis, and he estimated the length of the animal in life to be about 1 metre (1.5 metres if the chelicerae are included, 3.3 and 4.9 ft). Based on more comprehensive material, including genital appendages, chelicerae and fragments of the metastoma (a large plate that is part of the abdomen) and telson discovered by German palaeontologist Walter R. Gross near Overath, Germany, Norwegian palaeontologist Leif Størmer provided a more comprehensive and detailed description of the species in 1936. Størmer interpreted the genital appendages as being segmented, distinct from other species of Pterygotus. British palaeontologist Charles D. Waterston erected the genus Jaekelopterus in 1964 to accommodate Pterygotus rhenaniae, which he considered sufficiently distinct from other species of Pterygotus to warrant its own genus, primarily due to the abdominal appendages of Jaekelopterus being segmented as opposed to those of Pterygotus. Waterston diagnosed Jaekelopterus as a pterygotid with segmented genital appendages, a trapezoid prosoma, narrow and long chelicerae with terminal teeth almost at right angles to the rami and the primary teeth slightly angled anteriorly and with a telson with an expanded terminal spine and dorsal keel. The generic name honours Otto Jaekel; the Greek word πτερόν (pteron), meaning "wing", is a common epithet in eurypterid names. In 1974, Størmer erected a new family to house the genus, Jaekelopteridae, due to the supposed considerable differences between the genital appendage of Jaekelopterus and other pterygotids. This diverging feature has since been proven to simply represent a misinterpretation by Størmer in 1936, the genital appendage of Jaekelopterus in fact being unsegmented like that of Pterygotus. As such, the family Jaekelopteridae has subsequently been rejected and treated as synonymous with the family Pterygotidae. Another species of Pterygotus, P. howelli, was named by American palaeontologist Erik Kjellesvig-Waering and Størmer in 1952 based on a fossil telson and tergite (the dorsal part of a body segment) from Lower Devonian deposits of the Beartooth Butte Formation in Wyoming. The species name howelli honours Dr. Benjamin Howell of Princeton University, who loaned the fossil specimens examined in the description to Kjellesvig-Waering and Størmer. 
This species was assigned to Jaekelopterus as Jaekelopterus howelli by Norwegian palaeontologist O. Erik Tetlie in 2007. Classification Jaekelopterus is classified within the family Pterygotidae in the superfamily Pterygotioidea. Jaekelopterus is similar to Pterygotus, virtually only distinct in features of its genital appendage and potentially its telson. The close similarities between the two genera have prompted some researchers to question if the pterygotids are oversplit on the generic level. Based on some similarities in the genital appendage, American palaeontologists James C. Lamsdell and David A. Legg suggested in 2010 that Jaekelopterus, Pterygotus and even Acutiramus could be synonyms of each other. Though differences have been noted in chelicerae, these structures were questioned as the basis of generic distinctions in eurypterids by Charles D. Waterston in 1964 since their morphology is dependent on lifestyle and varies throughout ontogeny (the development of the organism following its birth). Whilst telson morphology can be used to distinguish genera in eurypterids, Lamsdell and Legg noted that the triangular telson of Jaekelopterus might still fall within the morphological range of the paddle-shaped telsons present in Pterygotus and Acutiramus. Genital appendages can vary even within genera; for instance, the genital appendage of Acutiramus changes from species to species, being spoon-shaped in earlier species and then becoming bilobed and eventually beginning to look similar to the appendage of Jaekelopterus. Lamsdell and Legg concluded that an inclusive phylogenetic analysis with multiple species of Acutiramus, Pterygotus and Jaekelopterus is required to resolve whether the genera are synonyms of each other. The cladogram below is based on the nine best-known pterygotid species and two outgroup taxa (Slimonia acuminata and Hughmilleria socialis). Jaekelopterus had previously been classified as a basal sister taxon to the rest of the Pterygotidae since its description as a separate genus by Waterston in 1964 due to its supposedly segmented genital appendages (fused and undivided in other pterygotids), but restudy of the specimens in question revealed that the genital appendage of Jaekelopterus also was undivided. The material examined and phylogenetic analysis conducted by British palaeontologist Simon J. Braddy, German palaeontologist Markus Poschmann and O. Erik Tetlie in 2007 revealed that Jaekelopterus was not a basal pterygotid, but one of the most derived taxa in the group. The cladogram also contains the maximum sizes reached by the species in question, which was suggested to possibly have been an evolutionary trait of the group per Cope's rule ("phyletic gigantism") by Braddy, Poschmann and Tetlie. Palaeobiology Gigantism The pterygotid eurypterids include many of the largest known eurypterids, such as Pterygotus and Acutiramus. Several factors have been suggested that might have contributed to the unprecedented large size of Jaekelopterus, its relatives and other large Paleozoic invertebrates, such as predation, courtship behaviour, competition and environmental resources. Factors such as respiration, the energy costs of moulting, locomotion and the actual properties of the exoskeleton restrict the size of arthropods. Other than the robust and heavily sclerotised claws, most of the preserved large body segments of the pterygotids are thin and unmineralised. 
Even tergites and sternites (the plates that form the surfaces of the abdominal segments) are generally preserved as paper-thin compressions, suggesting that pterygotids were very lightweight in construction. Similar lightweight adaptations can be observed in other Paleozoic giant arthropods, such as the giant millipede-like Arthropleura, and it has been suggested to be vital for the evolution of giant arthropod sizes. A lightweight build decreases the influence of factors that restrict body size. Despite being the largest arthropods, the lightweight build of Jaekelopterus and other giant pterygotid eurypterids meant they likely were not the heaviest. Other giant eurypterids, particularly the deep-bodied walking forms in the Hibbertopteridae, such as the almost 2-metre-long Hibbertopterus, may have rivalled the pterygotids and other giant arthropods in weight, if not surpassed them. American palaeontologist Alexander Kaiser and South African palaeontologist Jaco Klok suggested in 2008 that the massive size estimates for Jaekelopterus are exaggerated, noting that the size estimates assume that the relative proportions between the chelicerae and body length would stay the same as the animal matured. The denticles (the serrations of the claws) were observed as showing positive allometry (being proportionally larger in larger specimens), which Kaiser and Klok suggest could have occurred in the chelicerae as a whole. Furthermore, the largest coxae (limb segments) found of the same species, measuring wide, suggest a total maximum body length of only . Positive allometry has not been demonstrated in eurypterid chelicerae as a whole in any other eurypterid genus, including in the closest relatives of Jaekelopterus. There are also some undescribed specimens of J. rhenaniae similar in proportions to the large chelicera, including another claw found in the same strata as the original find. In the opinion of Braddy, Poschmann and Tetlie, who replied to Kaiser and Klok the same year, the size estimates around remain the most accurate estimates on the maximum size of the species yet. Ontogeny Like all other arthropods, eurypterids matured through a sequence of stages called "instars" consisting of periods of ecdysis (moulting) followed by rapid growth. Unlike many arthropods, such as insects and crustaceans, chelicerates (the group to which eurypterids like Jaekelopterus belongs, alongside other organisms such as horseshoe crabs, sea spiders and arachnids) are generally direct developers, meaning that there are no extreme morphological changes after they have hatched. Extant xiphosurans hatch without the full complement of adult opisthosomal appendages (appendages attached to the opisthosoma, the posterior segments of the body), but extant spiders are fully direct developers. Studies of fossil specimens of Strobilopterus and Jaekelopterus suggest that the ontogeny of eurypterids broadly parallelled that of modern horseshoe crabs, but that eurypterids (like arachnids) were true direct developers, hatching with the same number of appendages and segments as adults. Though several fossilised instars of Jaekelopterus howelli are known, the fragmentary and incomplete status of the specimens makes it difficult to study its ontogeny in detail. Despite this, there are some noticeable changes occurring in the chelicerae, telson and metastoma. Four of the J. howelli specimens studied by Lamsdell and Selden (2013) preserve the chelicerae in enough detail to allow for study of the denticles. 
Two of these chelicerae were assumed to come from juveniles and two were assumed to be from adults. The morphology of the chelicerae is similar across all ages, with the same arrangement and number of denticles, but there were also some noticeable differences. Particularly, the principal denticles grew in size relative to the intermediate denticles, being 1.5 times the size of the intermediate denticles in juveniles, but up to 3.5 times the size of the intermediate denticles in adults. Furthermore, the terminal denticle was far larger and more robust in adult specimens than in juveniles. Perhaps most extreme of all, the second intermediate denticle is not different in size from the other intermediate denticles in juveniles, but it is massively elongated in adults, where it is more than twice the length of any principal denticle. Though such growth in the denticles of pterygotids has been described in other genera, the massive elongation of the second intermediate denticle through ontogeny is unique to Jaekelopterus, particularly to J. howelli. The metastoma of Jaekelopterus also altered its dimensions as the animal matured. In J. rhenaniae, the relative width of the metastoma decreased through ontogeny. The metastoma in J. howelli is also broader in juveniles than in adults, although the length–width ratios measured in juveniles and adults were not as disparate as assumed, being 1.43 in juveniles and 1.46 in adults. Such a change in metastomal dimensions has been noted in other eurypterid genera as well, such as Stoermeropterus, Moselopterus and Strobilopterus. Visual system The cheliceral morphology and visual acuity of the pterygotid eurypterids separates them into distinct ecological groups. The primary method for determining visual acuity in arthropods is by determining the number of lenses in their compound eyes and the interommatidial angle (IOA), which is the angle between the optical axes of adjacent lenses. The IOA is especially important as it can be used to distinguish different ecological roles in arthropods, being low in modern active arthropod predators. Both Jaekelopterus rhenaniae and Pterygotus anglicus had high visual acuity, as suggested by the low IOA and many lenses in their compound eyes. Further studies on the compound eyes of fossilised specimens of J. rhenaniae, including a large specimen with the right eye preserved from the uppermost Siegenian and a small and likely juvenile specimen, confirmed the high visual acuity of the genus. The overall average IOA of Jaekelopterus (0.87°) is comparable to that of modern predatory arthropods. The visual acuity of Jaekelopterus increased with age, the smaller specimens having relatively worse eyesight. This is consistent with other pterygotids, such as Acutiramus, and has been interpreted as indicating that adult Jaekelopterus lived in darker environments, such as in deeper water. Trace fossil evidence of eurypterids also supports such a conclusion, indicating that eurypterids migrated to nearshore environments to mate and spawn. Jaekelopterus had a frontally overlapping visual field, i.e. stereoscopic vision, typical of predatory animals. Structurally, eurypterid eyes were almost identical to the eyes of horseshoe crabs. The square-like pattern of the receptor cells in the compound eyes of Jaekelopterus is also similar, but not identical, to the pattern in horseshoe crabs, suggesting a specialised visual system. The photoreceptors are unusually large in Jaekelopterus. 
At around 70 μm, they are far larger than those of humans (1-2 μm) and most arthropods (also 1-2 μm) but they match those of modern horseshoe crabs in size. The unique eyes of modern horseshoe crabs are highly distinct from eyes of other modern arthropods and allow increased edge-perception and enhance contrasts, important for animals in low and scattered light conditions. As the eyes of Jaekelopterus were very similar, it too likely had the same adaptations. With its highly specialised eyes, Jaekelopterus was very well adapted to its predatory lifestyle. Palaeoecology The morphology and body construction of Jaekelopterus and other eurypterids in the Pterygotidae suggests they were adapted to a completely aquatic lifestyle. Braddy, Poschmann and Tetlie considered in a 2007 study that it was highly unlikely that an arthropod with the size and build of Jaekelopterus would be able to walk on land. Eurypterids such as Jaekelopterus are often popularly referred to as "sea scorpions", but the deposits from which Jaekelopterus fossils have been discovered suggest that it lived in non-marine aquatic environments. The Beartooth Butte Formation in Wyoming, where J. howelli fossils have been discovered, has been interpreted as a quiet, shallow estuarine environment. This species has been found together with two other eurypterid species: Dorfopterus angusticollis and Strobilopterus princetonii. The fossil sites yielding J. rhenaniae in the Rhineland have also been interpreted as having been part of a shallow aquatic environment with brackish to fresh water. The chelicerae of Jaekelopterus are enlarged, robust and have a curved free ramus and denticles of different lengths and sizes, all adaptations that correspond to strong puncturing and grasping abilities in extant scorpions and crustaceans. Some puncture wounds on fossils of the poraspid agnathan fish Lechriaspis patula from the Devonian of Utah were likely caused by Jaekelopterus howelli. The latest research indicates that Jaekelopterus was an active and visual predator. Fully grown Jaekelopterus would have been apex predators in their environments and likely preyed upon smaller arthropods (including resorting to cannibalism) and early vertebrates. A powerful and active predator, Jaekelopterus was likely highly agile and possessed high maneuverability. The hydromechanics of the swimming paddles and telsons of Jaekelopterus and other pterygotids suggest that all members of the group were capable of hovering, forward locomotion and quick turns. Though they were not necessarily rapidly swimming animals, they were likely able to give chase to prey in habitats such as lagoons and estuaries.
Biology and health sciences
Fossil arthropods
Animals
41244
https://en.wikipedia.org/wiki/Hybrid%20%28biology%29
Hybrid (biology)
In biology, a hybrid is the offspring resulting from combining the qualities of two organisms of different varieties, subspecies, species or genera through sexual reproduction. Generally, it means that each cell has genetic material from two different organisms, whereas an individual where some cells are derived from a different organism is called a chimera. Hybrids are not always intermediates between their parents, as blending inheritance (a theory now discredited in modern genetics, superseded by particulate inheritance) would suggest, but can show hybrid vigor, sometimes growing larger or taller than either parent. The concept of a hybrid is interpreted differently in animal and plant breeding, where there is interest in the individual parentage. In genetics, attention is focused on the numbers of chromosomes. In taxonomy, a key question is how closely related the parent species are. Species are reproductively isolated by strong barriers to hybridization, which include genetic and morphological differences, differing times of fertility, mating behaviors and cues, and physiological rejection of sperm cells or the developing embryo. Some act before fertilization and others after it. Similar barriers exist in plants, with differences in flowering times, pollen vectors, inhibition of pollen tube growth, somatoplastic sterility, cytoplasmic-genic male sterility and the structure of the chromosomes. A few animal species and many plant species, however, are the result of hybrid speciation, including important crop plants such as wheat, where the number of chromosomes has been doubled. A form of often intentional human-mediated hybridization is the crossing of wild and domesticated species. This is common in both traditional horticulture and modern agriculture; many commercially useful fruits, flowers, garden herbs, and trees have been produced by hybridization. One such flower, Oenothera lamarckiana, was central to early genetics research into mutationism and polyploidy. It is also more occasionally done in the livestock and pet trades; some well-known wild × domestic hybrids are beefalo and wolfdogs. Human selective breeding of domesticated animals and plants has also resulted in the development of distinct breeds (usually called cultivars in reference to plants); crossbreeds between them (without any wild stock) are sometimes also imprecisely referred to as "hybrids". Hybrid humans existed in prehistory. For example, Neanderthals and anatomically modern humans are thought to have interbred as recently as 40,000 years ago. Mythological hybrids appear in human culture in forms as diverse as the Minotaur, blends of animals, humans and mythical beasts such as centaurs and sphinxes, and the Nephilim of the Biblical apocrypha, described as the wicked sons of fallen angels and attractive women. Significance In evolution Hybridization between species plays an important role in evolution, though there is much debate about its significance. Roughly 25% of plants and 10% of animals are known to form hybrids with at least one other species. One example of an adaptive benefit to hybridization is that hybrid individuals can form a "bridge" transmitting potentially helpful genes from one species to another when the hybrid backcrosses with one of its parent species, a process called introgression. Hybrids can also cause speciation, either because the hybrids are genetically incompatible with their parents but not with each other, or because the hybrids occupy a different niche than either parent.
Hybridization is a particularly common mechanism for speciation in plants, and is now known to be fundamental to the evolutionary history of plants. Plants frequently form polyploids, individuals with more than two copies of each chromosome. Whole genome doubling has occurred repeatedly in plant evolution. When two plant species hybridize, the hybrid may double its chromosome count by incorporating the entire nuclear genome of both parents, resulting in offspring that are reproductively incompatible with either parent because of different chromosome counts. In conservation Human impact on the environment has resulted in an increase in the interbreeding between regional species, and the proliferation of introduced species worldwide has also resulted in an increase in hybridization. This has been referred to as genetic pollution out of concern that it may threaten many species with extinction. Similarly, genetic erosion from monoculture in crop plants may be damaging the gene pools of many species for future breeding. The conservation impacts of hybridization between species are highly debated. While hybridization could potentially threaten rare species or lineages by "swamping" the genetically "pure" individuals with hybrids, hybridization could also save a rare lineage from extinction by introducing genetic diversity. It has been proposed that hybridization could be a useful tool to conserve biodiversity by allowing organisms to adapt, and that efforts to preserve the separateness of a "pure" lineage could harm conservation by lowering the organisms' genetic diversity and adaptive potential, particularly in species with low populations. While endangered species are often protected by law, hybrids are often excluded from protection, resulting in challenges to conservation. Etymology The term hybrid is derived from Latin , used for crosses such as of a tame sow and a wild boar. The term came into popular use in English in the 19th century, though examples of its use have been found from the early 17th century. Conspicuous hybrids are popularly named with portmanteau words, starting in the 1920s with the breeding of tiger–lion hybrids (liger and tigon). As seen by different disciplines Animal and plant breeding From the point of view of animal and plant breeders, there are several kinds of hybrid formed from crosses within a species, such as between different breeds. Single cross hybrids result from the cross between two true-breeding organisms which produces an F1 hybrid (first filial generation). The cross between two different homozygous lines produces an F1 hybrid that is heterozygous; having two alleles, one contributed by each parent and typically one is dominant and the other recessive. Typically, the F1 generation is also phenotypically homogeneous, producing offspring that are all similar to each other. Double cross hybrids result from the cross between two different F1 hybrids (i.e., there are four unrelated grandparents). Three-way cross hybrids result from the cross between an F1 hybrid and an inbred line. Triple cross hybrids result from the crossing of two different three-way cross hybrids. Top cross (or "topcross") hybrids result from the crossing of a top quality or pure-bred male and a lower quality female, intended to improve the quality of the offspring, on average. Population hybrids result from the crossing of plants or animals in one population with those of another population. These include interspecific hybrids or crosses between different breeds. 
In biology, the result of crossing of two populations is called a synthetic population. In horticulture, the term stable hybrid is used to describe an annual plant that, if grown and bred in a small monoculture free of external pollen (e.g., an air-filtered greenhouse) produces offspring that are "true to type" with respect to phenotype; i.e., a true-breeding organism. Biogeography Hybridization can occur in the hybrid zones where the geographical ranges of species, subspecies, or distinct genetic lineages overlap. For example, the butterfly Limenitis arthemis has two major subspecies in North America, L. a. arthemis (the white admiral) and L. a. astyanax (the red-spotted purple). The white admiral has a bright, white band on its wings, while the red-spotted purple has cooler blue-green shades. Hybridization occurs between a narrow area across New England, southern Ontario, and the Great Lakes, the "suture region". It is at these regions that the subspecies were formed. Other hybrid zones have formed between described species of plants and animals. Genetics From the point of view of genetics, several different kinds of hybrid can be distinguished. A genetic hybrid carries two different alleles of the same gene, where for instance one allele may code for a lighter coat colour than the other. A structural hybrid results from the fusion of gametes that have differing structure in at least one chromosome, as a result of structural abnormalities. A numerical hybrid results from the fusion of gametes having different haploid numbers of chromosomes. A permanent hybrid results when only the heterozygous genotype occurs, as in Oenothera lamarckiana, because all homozygous combinations are lethal. In the early history of genetics, Hugo de Vries supposed these were caused by mutation. Genetic complementation Genetic complementation is a hybridization test widely used in genetics to determine whether two separately isolated mutants that have the same (or similar) phenotype are defective in the same gene or in different genes (see complementation). If a hybrid organism containing the genomes of two different mutant parental organisms displays a wild type phenotype, it is ordinarily considered that the two parental mutant organisms are defective in different genes. If the hybrid organism displays a distinctly mutant phenotype, the two mutant parental organisms are considered to be defective in the same gene. However, in some cases the hybrid organism may display a phenotype that is only weakly (or partially) wild-type, and this may reflect intragenic (interallelic) complementation. Taxonomy From the point of view of taxonomy, hybrids differ according to their parentage. Hybrids between different subspecies (such as between the dog and Eurasian wolf) are called intra-specific hybrids. Interspecific hybrids are the offspring from interspecies mating; these sometimes result in hybrid speciation. Intergeneric hybrids result from matings between different genera, such as between sheep and goats. Interfamilial hybrids, such as between chickens and guineafowl or pheasants, are reliably described but extremely rare. Interordinal hybrids (between different orders) are few, but have been engineered between the sea urchin Strongylocentrotus purpuratus (female) and the sand dollar Dendraster excentricus (male). 
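To make the single-cross and "two different alleles" ideas described above concrete, the short sketch below enumerates the offspring genotypes of a monohybrid cross in Python. The allele symbols (A dominant, a recessive) and the simple dominance assumption are standard textbook conventions used here only for illustration.

    from itertools import product
    from collections import Counter

    def cross(parent1, parent2):
        """Enumerate offspring genotypes from the gametes of two parents (one allele from each)."""
        offspring = []
        for g1, g2 in product(parent1, parent2):
            offspring.append("".join(sorted((g1, g2))))  # e.g. 'a' + 'A' -> 'Aa'
        return Counter(offspring)

    # Two true-breeding (homozygous) lines give a uniform heterozygous F1 generation.
    f1 = cross("AA", "aa")
    print(f1)   # Counter({'Aa': 4}): every F1 offspring is Aa

    # Crossing two F1 individuals gives the classic 1 AA : 2 Aa : 1 aa ratio.
    f2 = cross("Aa", "Aa")
    print(f2)   # Counter({'Aa': 2, 'AA': 1, 'aa': 1})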
Biology Expression of parental traits When two distinct types of organisms breed with each other, the resulting hybrids typically have intermediate traits (e.g., one plant parent has red flowers, the other has white, and the hybrid, pink flowers). Commonly, hybrids also combine traits seen only separately in one parent or the other (e.g., a bird hybrid might combine the yellow head of one parent with the orange belly of the other). Mechanisms of reproductive isolation Interspecific hybrids are bred by mating individuals from two species, normally from within the same genus. The offspring display traits and characteristics of both parents, but are often sterile, preventing gene flow between the species. Sterility is often attributed to the different numbers of chromosomes of the two species. For example, donkeys have 62 chromosomes, horses have 64 chromosomes, and mules or hinnies have 63 chromosomes. Mules, hinnies, and other normally sterile interspecific hybrids cannot produce viable gametes because differences in chromosome structure prevent appropriate pairing and segregation during meiosis; meiosis is disrupted, and viable sperm and eggs are not formed. However, fertility in female mules has been reported with a donkey as the father. A variety of mechanisms limit the success of hybridization, including the large genetic difference between most species. Barriers include morphological differences, differing times of fertility, mating behaviors and cues, and physiological rejection of sperm cells or the developing embryo. Some act before fertilization; others after it. In plants, some barriers to hybridization include blooming period differences, different pollinator vectors, inhibition of pollen tube growth, somatoplastic sterility, cytoplasmic-genic male sterility and structural differences of the chromosomes. Speciation A few animal species are the result of hybridization. The Lonicera fly is a natural hybrid. The American red wolf appears to be a hybrid of the gray wolf and the coyote, although its taxonomic status has been a subject of controversy. The European edible frog is a semi-permanent hybrid between pool frogs and marsh frogs; its population requires the continued presence of at least one of the parent species. Cave paintings indicate that the European bison is a natural hybrid of the aurochs and the steppe bison. Plant hybridization is more commonplace than animal hybridization. Many crop species are hybrids, including notably the polyploid wheats: some have four sets of chromosomes (tetraploid) or six (hexaploid), while other wheat species have (like most eukaryotic organisms) two sets (diploid), so hybridization events likely involved the doubling of chromosome sets, causing immediate genetic isolation. Hybridization may be important in speciation in some plant groups. However, homoploid hybrid speciation (not increasing the number of sets of chromosomes) may be rare: by 1997, only eight natural examples had been fully described. Experimental studies suggest that hybridization offers a rapid route to speciation, a prediction confirmed by the fact that early-generation hybrids and ancient hybrid species have matching genomes, meaning that once hybridization has occurred, the new hybrid genome can remain stable. Many hybrid zones are known where the ranges of two species meet, and hybrids are continually produced in great numbers. These hybrid zones are useful as biological model systems for studying the mechanisms of speciation.
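The chromosome-count arithmetic behind mule sterility, described above, is simple enough to show directly: each parent's gamete carries half of its diploid count, so the hybrid inherits the sum of the two halves. The sketch below restates the donkey, horse and mule numbers from the text; the small species dictionary is an illustrative stand-in, not a data source.

    # Diploid chromosome counts taken from the text above.
    diploid = {"donkey": 62, "horse": 64}

    def hybrid_chromosomes(parent_a, parent_b):
        """Each gamete carries half the diploid count; the hybrid receives one gamete from each parent."""
        return diploid[parent_a] // 2 + diploid[parent_b] // 2

    mule = hybrid_chromosomes("donkey", "horse")
    print(mule)           # 31 + 32 = 63
    print(mule % 2 == 1)  # an odd total is one reason homologous pairing fails during meiosis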
Recently, DNA analysis of a bear shot by a hunter in the Northwest Territories confirmed the existence of naturally occurring and fertile grizzly–polar bear hybrids. Hybrid vigour Hybridization between reproductively isolated species often results in hybrid offspring with lower fitness than either parent. However, hybrids are not, as might be expected, always intermediate between their parents (as if there were blending inheritance), but are sometimes stronger or perform better than either parental lineage or variety, a phenomenon called heterosis, hybrid vigour, or heterozygote advantage. This is most common with plant hybrids. A transgressive phenotype is a phenotype that displays more extreme characteristics than either of the parent lines. Plant breeders use several techniques to produce hybrids, including line breeding and the formation of complex hybrids. An economically important example is hybrid maize (corn), which provides a considerable seed yield advantage over open pollinated varieties. Hybrid seed dominates the commercial maize seed market in the United States, Canada and many other major maize-producing countries. In a hybrid, any trait that falls outside the range of parental variation (and is thus not simply intermediate between its parents) is considered heterotic. Positive heterosis produces more robust hybrids, which may be stronger or bigger; negative heterosis refers to weaker or smaller hybrids. Heterosis is common in both animal and plant hybrids. For example, hybrids between a lion and a tigress ("ligers") are much larger than either of the two progenitors, while "tigons" (lioness × tiger) are smaller. Similarly, the hybrids between the common pheasant (Phasianus colchicus) and domestic fowl (Gallus gallus) are larger than either of their parents, as are those produced between the common pheasant and hen golden pheasant (Chrysolophus pictus). Spurs are absent in hybrids of the former type, although present in both parents. Human influence Anthropogenic hybridization Hybridization is greatly influenced by human impact on the environment, through effects such as habitat fragmentation and species introductions. Such impacts make it difficult to conserve the genetics of populations undergoing introgressive hybridization. Humans have long introduced species worldwide to new environments, both intentionally for purposes such as biological control, and unintentionally, as with accidental escapes of individuals. Introductions can drastically affect populations, including through hybridization. Management There is a kind of continuum with three semi-distinct categories dealing with anthropogenic hybridization: hybridization without introgression, hybridization with widespread introgression (backcrossing with one of the parent species), and hybrid swarms (highly variable populations with much interbreeding as well as backcrossing with the parent species). Depending on where a population falls along this continuum, the management plans for that population will change. Hybridization is currently an area of great discussion within wildlife management and habitat management. Global climate change is producing other changes, such as shifts in population distributions, that are indirect causes of an increase in anthropogenic hybridization. Conservationists disagree on when to give up on a population that is becoming a hybrid swarm and when to keep trying to save the remaining pure individuals.
Once a population becomes a complete mixture, the goal becomes to conserve those hybrids to avoid their loss. Conservationists treat each case on its merits, depending on whether hybrids are detected within the population. It is nearly impossible to formulate a uniform hybridization policy, because hybridization can be beneficial when it occurs naturally, and because hybrid swarms that are the only remaining evidence of prior species also need to be conserved. Genetic mixing and extinction Regionally developed ecotypes can be threatened with extinction when new alleles or genes are introduced that alter that ecotype. This is sometimes called genetic mixing. Hybridization and the introgression of new genetic material, which can happen in both natural and hybrid populations, can lead to the replacement of local genotypes if the hybrids are more fit and have breeding advantages over the indigenous ecotype or species. These hybridization events can result from the introduction of non-native genotypes by humans or through habitat modification, bringing previously isolated species into contact. Genetic mixing can be especially detrimental for rare species in isolated habitats, ultimately affecting the population to such a degree that none of the originally genetically distinct population remains. Effect on biodiversity and food security In agriculture and animal husbandry, the Green Revolution's use of conventional hybridization increased yields by breeding high-yielding varieties. The replacement of locally indigenous breeds, compounded with unintentional cross-pollination and crossbreeding (genetic mixing), has reduced the gene pools of various wild and indigenous breeds, resulting in the loss of genetic diversity. Since the indigenous breeds are often well-adapted to local extremes in climate and have immunity to local pathogens, this can represent a significant genetic erosion of the gene pool for future breeding. Therefore, commercial plant geneticists strive to breed "widely adapted" cultivars to counteract this tendency. Different taxa In animals Mammals Familiar examples of equid hybrids are the mule, a cross between a female horse and a male donkey, and the hinny, a cross between a female donkey and a male horse. Pairs of complementary types like the mule and hinny are called reciprocal hybrids. Polar bears and brown bears are another example of a hybridizing species pair, and introgression among non-sister species of bears appears to have shaped the Ursidae family tree. Among many other mammal crosses are hybrid camels, crosses between a bactrian camel and a dromedary. There are many examples of felid hybrids, including the liger. The oldest-known animal hybrid bred by humans is the kunga, an equid hybrid produced as a draft animal and status symbol 4,500 years ago in Umm el-Marra, present-day Syria. The first known instance of hybrid speciation in marine mammals was discovered in 2014. The clymene dolphin (Stenella clymene) is a hybrid of two Atlantic species, the spinner and striped dolphins. In 2019, scientists confirmed that a skull found 30 years earlier was a hybrid between the beluga whale and narwhal, dubbed the narluga. Birds Hybridization between species is common in birds. Hybrid birds are purposefully bred by humans, but hybridization is also common in the wild. Waterfowl have a particularly high incidence of hybridization, with at least 60% of species known to produce hybrids with another species.
Among ducks, mallards widely hybridize with many other species, and the genetic relationships between ducks are further complicated by widespread gene flow between wild and domestic mallards. One of the most common interspecific hybrids in geese occurs between greylag and Canada geese (Anser anser × Branta canadensis). One potential mechanism for the occurrence of hybrids in these geese is interspecific nest parasitism, where an egg is laid in the nest of another species to be raised by non-biological parents. The chick imprints upon and eventually seeks a mate among the species that raised it, instead of the species of its biological parents. Cagebird breeders sometimes breed bird hybrids known as mules between species of finch, such as goldfinch × canary. Amphibians Among amphibians, Japanese giant salamanders and Chinese giant salamanders have created hybrids that threaten the survival of Japanese giant salamanders because of competition for similar resources in Japan. Fish Among fish, a group of about 50 natural hybrids between the Australian blacktip shark and the larger common blacktip shark was found off Australia's eastern coast in 2012. Russian sturgeon and American paddlefish were hybridized in captivity when sperm from the paddlefish and eggs from the sturgeon were combined, unexpectedly resulting in viable offspring. This hybrid is called a sturddlefish. Cephalochordates The two genera Asymmetron and Branchiostoma are able to produce viable hybrid offspring, although none have survived to adulthood so far, despite the parents' common ancestor living tens of millions of years ago. Insects Among insects, so-called killer bees were accidentally created during an attempt to breed a strain of bees that would both produce more honey and be better adapted to tropical conditions. This was done by crossing a European honey bee and an African bee. The Colias eurytheme and C. philodice butterflies have retained enough genetic compatibility to produce viable hybrid offspring. Hybrid speciation may have produced the diverse Heliconius butterflies, but that is disputed. The two closely related harvester ant species Pogonomyrmex barbatus and Pogonomyrmex rugosus have evolved to depend on hybridization. When a queen fertilizes her eggs with sperm from males of her own species, the offspring are always new queens; when she fertilizes the eggs with sperm from males of the other species, the offspring are always sterile worker ants (and because ants are haplodiploid, unfertilized eggs become males). Without mating with males of the other species, the queens are unable to produce workers and will fail to establish a colony of their own. In plants Plant species hybridize more readily than animal species, and the resulting hybrids are fertile more often. Many plant species are the result of hybridization, combined with polyploidy, which duplicates the chromosomes. Chromosome duplication allows orderly meiosis and so viable seed can be produced. Plant hybrids are generally given names that include an "×" (not in italics), such as Platanus × hispanica for the London plane, a natural hybrid of P. orientalis (oriental plane) and P. occidentalis (American sycamore). The parents' names may be kept in their entirety, as seen in Prunus persica × Prunus americana, with the female parent's name given first or, if the sexes are not known, the parents' names given alphabetically.
Plant species that are genetically compatible may not hybridize in nature for various reasons, including geographical isolation, differences in flowering period, or differences in pollinators. Species that are brought together by humans in gardens may hybridize naturally, or hybridization can be facilitated by human efforts, such as altered flowering period or artificial pollination. Hybrids are sometimes created by humans to produce improved plants that have some of the characteristics of each of the parent species. Much work is now being done with hybrids between crops and their wild relatives to improve disease resistance or climate resilience for both agricultural and horticultural crops. Some crop plants are hybrids from different genera (intergeneric hybrids), such as Triticale, × Triticosecale, a wheat–rye hybrid. Most modern and ancient wheat breeds are themselves hybrids; bread wheat, Triticum aestivum, is a hexaploid hybrid of three wild grasses. Several commercial fruits, including loganberry (Rubus × loganobaccus) and grapefruit (Citrus × paradisi), are hybrids, as are garden herbs such as peppermint (Mentha × piperita), and trees such as the London plane (Platanus × hispanica). Among many natural plant hybrids is Iris albicans, a sterile hybrid that spreads by rhizome division, and Oenothera lamarckiana, a flower that was the subject of important experiments by Hugo de Vries that produced an understanding of polyploidy. Sterility in a non-polyploid hybrid is often a result of chromosome number; if parents are of differing chromosome pair number, the offspring will have an odd number of chromosomes, which leaves them unable to produce chromosomally balanced gametes. While that is undesirable in a crop such as wheat, for which growing a crop that produces no seeds would be pointless, it is an attractive attribute in some fruits. Triploid bananas and watermelons are intentionally bred because they produce no seeds and are also parthenocarpic. In fungi Hybridization between fungal species is common and well established, particularly in yeast. Yeast hybrids are widely found and used in human-related activities, such as brewing and winemaking. The production of lager beers, for instance, is known to be carried out by the yeast Saccharomyces pastorianus, a cryotolerant hybrid between Saccharomyces cerevisiae and Saccharomyces eubayanus, which allows fermentation at low temperatures. In humans There is evidence of hybridization between modern humans and other species of the genus Homo. In 2010, the Neanderthal genome project showed that 1–4% of DNA from all people living today, apart from most Sub-Saharan Africans, is of Neanderthal heritage. An analysis of the genomes of 600 Europeans and East Asians found that, combined, they covered 20% of the Neanderthal genome that survives in the modern human population. Ancient human populations lived and interbred with Neanderthals, Denisovans, and at least one other extinct Homo species. Thus, Neanderthal and Denisovan DNA has been incorporated into human DNA by introgression. In 1998, a complete prehistoric skeleton found in Portugal, the Lapedo child, had features of both anatomically modern humans and Neanderthals. Some ancient human skulls with especially large nasal cavities and unusually shaped braincases represent human–Neanderthal hybrids. A 37,000- to 42,000-year-old human jawbone found in Romania's Oase cave contains traces of Neanderthal ancestry from only four to six generations earlier.
All genes from Neanderthals in the current human population are descended from Neanderthal fathers and human mothers. Mythology Folk tales and myths sometimes contain mythological hybrids; the Minotaur was the offspring of a human, Pasiphaë, and a white bull. More often, they are composites of the physical attributes of two or more kinds of animals, mythical beasts, and humans, with no suggestion that they are the result of interbreeding, as in the centaur (man/horse), chimera (goat/lion/snake), hippocamp (fish/horse), and sphinx (woman/lion). The Old Testament mentions a first generation of half-human hybrid giants, the Nephilim, while the apocryphal Book of Enoch describes the Nephilim as the wicked sons of fallen angels and attractive women.
Linear polarization
In electrodynamics, linear polarization or plane polarization of electromagnetic radiation is a confinement of the electric field vector or magnetic field vector to a given plane along the direction of propagation. The term linear polarization (French: polarisation rectiligne) was coined by Augustin-Jean Fresnel in 1822. See polarization and plane of polarization for more information. The orientation of a linearly polarized electromagnetic wave is defined by the direction of the electric field vector. For example, if the electric field vector is vertical (alternately up and down as the wave travels) the radiation is said to be vertically polarized. Mathematical description The classical sinusoidal plane wave solution of the electromagnetic wave equation for the electric and magnetic fields is (cgs units)
$$\mathbf{E}(\mathbf{r},t) = |\mathbf{E}| \operatorname{Re}\left\{ |\psi\rangle \exp\left[i\left(kz - \omega t\right)\right] \right\}$$
$$\mathbf{B}(\mathbf{r},t) = \hat{\mathbf{z}} \times \frac{\mathbf{E}(\mathbf{r},t)}{c}$$
for the magnetic field, where $k$ is the wavenumber, $\omega$ is the angular frequency of the wave, and $c$ is the speed of light. Here $|\mathbf{E}|$ is the amplitude of the field and
$$|\psi\rangle \equiv \begin{pmatrix} \psi_x \\ \psi_y \end{pmatrix} = \begin{pmatrix} \cos\theta\, e^{i\alpha_x} \\ \sin\theta\, e^{i\alpha_y} \end{pmatrix}$$
is the Jones vector in the x-y plane. The wave is linearly polarized when the phase angles are equal, $\alpha_x = \alpha_y = \alpha$. This represents a wave polarized at an angle $\theta$ with respect to the x axis. In that case, the Jones vector can be written
$$|\psi\rangle = \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} e^{i\alpha}.$$
The state vectors for linear polarization in x or y are special cases of this state vector. If unit vectors are defined such that
$$|x\rangle \equiv \begin{pmatrix} 1 \\ 0 \end{pmatrix} \qquad \text{and} \qquad |y\rangle \equiv \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
then the polarization state can be written in the "x-y basis" as
$$|\psi\rangle = \cos\theta\, e^{i\alpha} |x\rangle + \sin\theta\, e^{i\alpha} |y\rangle = \psi_x |x\rangle + \psi_y |y\rangle.$$
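The linearity condition above is easy to check numerically. A small sketch using NumPy (the helper names are invented for this example):

```python
import numpy as np

def jones_vector(theta: float, alpha_x: float, alpha_y: float) -> np.ndarray:
    """Jones vector (psi_x, psi_y) in the x-y basis, as defined above."""
    return np.array([np.cos(theta) * np.exp(1j * alpha_x),
                     np.sin(theta) * np.exp(1j * alpha_y)])

def is_linear(psi: np.ndarray, tol: float = 1e-12) -> bool:
    """The wave is linearly polarized when the two components share a
    common phase, i.e. psi_x * conj(psi_y) is real."""
    return abs(np.imag(psi[0] * np.conj(psi[1]))) < tol

theta, alpha = np.pi / 6, 0.4
psi = jones_vector(theta, alpha, alpha)   # equal phase angles -> linear
print(is_linear(psi))                                      # True
print(np.degrees(np.arctan2(abs(psi[1]), abs(psi[0]))))    # 30.0 deg from the x axis
```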
Multiplexing
In telecommunications and computer networking, multiplexing (sometimes contracted to muxing) is a method by which multiple analog or digital signals are combined into one signal over a shared medium. The aim is to share a scarce resource – a physical transmission medium. For example, in telecommunications, several telephone calls may be carried using one wire. Multiplexing originated in telegraphy in the 1870s, and is now widely applied in communications. In telephony, George Owen Squier is credited with the development of telephone carrier multiplexing in 1910. The multiplexed signal is transmitted over a communication channel such as a cable. The multiplexing divides the capacity of the communication channel into several logical channels, one for each message signal or data stream to be transferred. A reverse process, known as demultiplexing, extracts the original channels on the receiver end. A device that performs the multiplexing is called a multiplexer (MUX), and a device that performs the reverse process is called a demultiplexer (DEMUX or DMX). Inverse multiplexing (IMUX) has the opposite aim to multiplexing, namely to break one data stream into several streams, transfer them simultaneously over several communication channels, and recreate the original data stream. In computing, I/O multiplexing can also refer to the concept of processing multiple input/output events from a single event loop, with system calls like poll and select (Unix). Types Multiple variable bit rate digital bit streams may be transferred efficiently over a single fixed bandwidth channel by means of statistical multiplexing. This is an asynchronous time-domain multiplexing, a form of time-division multiplexing. Digital bit streams can be transferred over an analog channel by means of code-division multiplexing techniques such as frequency-hopping spread spectrum (FHSS) and direct-sequence spread spectrum (DSSS). In wireless communications, multiplexing can also be accomplished through alternating polarization (horizontal/vertical or clockwise/counterclockwise) on each adjacent channel and satellite, or through a phased multi-antenna array combined with a multiple-input multiple-output (MIMO) communications scheme. Space-division multiplexing In wired communication, space-division multiplexing, also known as space-division multiple access (SDMA), is the use of separate point-to-point electrical conductors for each transmitted channel. Examples include an analog stereo audio cable, with one pair of wires for the left channel and another for the right channel, a multi-pair telephone cable, a switched star network such as a telephone access network, a switched Ethernet network, and a mesh network. In wireless communication, space-division multiplexing is achieved with multiple antenna elements forming a phased array antenna. Examples are multiple-input and multiple-output (MIMO), single-input and multiple-output (SIMO) and multiple-input and single-output (MISO) multiplexing. An IEEE 802.11g wireless router with k antennas makes it in principle possible to communicate with k multiplexed channels, each with a peak bit rate of 54 Mbit/s, thus increasing the total peak bit rate by the factor k. Different antennas would give different multi-path propagation (echo) signatures, making it possible for digital signal processing techniques to separate different signals from each other.
These techniques may also be utilized for space diversity (improved robustness to fading) or beamforming (improved selectivity) rather than multiplexing. Frequency-division multiplexing Frequency-division multiplexing (FDM) is inherently an analog technology. FDM combines several signals into one medium by sending them in several distinct frequency ranges over a single medium. In FDM the signals are electrical signals. One of the most common applications for FDM is traditional radio and television broadcasting from terrestrial, mobile or satellite stations, or cable television. Only one cable reaches a customer's residential area, but the service provider can send multiple television channels or signals simultaneously over that cable to all subscribers without interference. Receivers must tune to the appropriate frequency (channel) to access the desired signal. A variant technology, called wavelength-division multiplexing (WDM), is used in optical communications. Time-division multiplexing Time-division multiplexing (TDM) is a digital (or in rare cases, analog) technology that uses time, instead of space or frequency, to separate the different data streams. TDM involves sequencing groups of a few bits or bytes from each individual input stream, one after the other, and in such a way that they can be associated with the appropriate receiver. If done sufficiently quickly, the receiving devices will not detect that some of the circuit time was used to serve another logical communication path. Consider an application requiring four terminals at an airport to reach a central computer. Each terminal communicates at 2400 baud, so rather than acquire four individual circuits to carry such low-speed transmissions, the airline installs a pair of multiplexers. A pair of 9600 baud modems and one dedicated analog communications circuit from the airport ticket desk back to the airline data center are also installed. Some web proxy servers (e.g. polipo) use TDM in HTTP pipelining of multiple HTTP transactions onto the same TCP/IP connection. Carrier-sense multiple access and multidrop communication methods are similar to time-division multiplexing in that multiple data streams are separated by time on the same medium, but because the signals have separate origins instead of being combined into a single signal, they are best viewed as channel access methods rather than as a form of multiplexing. TDM is a legacy multiplexing technology that still provides the backbone of most national fixed-line telephony networks in Europe, providing the 2 Mbit/s voice and signaling ports on narrow-band telephone exchanges such as the DMS100. Each E1 or 2 Mbit/s TDM port provides either 30 or 31 speech timeslots in the case of CCITT7 signaling systems, and 30 voice channels for customer-connected Q931, DASS2, DPNSS, V5 and CASS signaling systems. Polarization-division multiplexing Polarization-division multiplexing uses the polarization of electromagnetic radiation to separate orthogonal channels. It is in practical use in both radio and optical communications, particularly in 100 Gbit/s per channel fiber-optic transmission systems. Differential Cross-Polarized Wireless Communications is a novel method for polarized antenna transmission utilizing a differential technique. Orbital angular momentum multiplexing Orbital angular momentum multiplexing is a relatively new and experimental technique for multiplexing multiple channels of signals carried using electromagnetic radiation over a single path.
It can potentially be used in addition to other physical multiplexing methods to greatly expand the transmission capacity of such systems. It is still in its early research phase, with small-scale laboratory demonstrations of bandwidths of up to 2.5 Tbit/s over a single light path. This is a controversial subject in the academic community, with many claiming it is not a new method of multiplexing, but rather a special case of space-division multiplexing. Code-division multiplexing Code-division multiplexing (CDM), code-division multiple access (CDMA) or spread spectrum is a class of techniques where several channels simultaneously share the same frequency spectrum, and this spectral bandwidth is much higher than the bit rate or symbol rate. One form is frequency hopping; another is direct-sequence spread spectrum. In the latter case, each channel transmits its bits as a coded channel-specific sequence of pulses called chips. The number of chips per bit, or chips per symbol, is the spreading factor. This coded transmission typically is accomplished by transmitting a unique time-dependent series of short pulses, which are placed within chip times within the larger bit time. All channels, each with a different code, can be transmitted on the same fiber or radio channel or other medium, and asynchronously demultiplexed. Advantages over conventional techniques are that variable bandwidth is possible (just as in statistical multiplexing), that the wide bandwidth allows operation at a poor signal-to-noise ratio in accordance with the Shannon–Hartley theorem, and that multi-path propagation in wireless communication can be combated by rake receivers. A significant application of CDMA is the Global Positioning System (GPS). Multiple access method A multiplexing technique may be further extended into a multiple access method or channel access method, for example, TDM into time-division multiple access (TDMA) and statistical multiplexing into carrier-sense multiple access (CSMA). A multiple-access method makes it possible for several transmitters connected to the same physical medium to share its capacity. Multiplexing is provided by the physical layer of the OSI model, while multiple access also involves a media access control protocol, which is part of the data link layer. The transport layer in the OSI model, as well as in the TCP/IP model, provides statistical multiplexing of several application layer data flows to/from the same computer. Code-division multiplexing, described above, is likewise extended into a multiple access method, namely code-division multiple access (CDMA), used in the Universal Mobile Telecommunications System (UMTS) standard for the third-generation (3G) mobile communication identified by the ITU. Other widely used multiple access techniques are time-division multiple access (TDMA) and frequency-division multiple access (FDMA). Application areas Telegraphy The earliest communication technology using electrical wires, and therefore sharing an interest in the economies afforded by multiplexing, was the electric telegraph.
Early experiments allowed two separate messages to travel in opposite directions simultaneously, first using an electric battery at both ends, then at only one end. Émile Baudot developed a time-multiplexing system of multiple Hughes machines in the 1870s. In 1874, the quadruplex telegraph developed by Thomas Edison transmitted two messages in each direction simultaneously, for a total of four messages transiting the same wire at the same time. Several researchers were investigating acoustic telegraphy, a frequency-division multiplexing technique, which led to the invention of the telephone. Telephony In telephony, a customer's telephone line now typically ends at the remote concentrator box, where it is multiplexed along with other telephone lines for that neighborhood or other similar area. The multiplexed signal is then carried to the central switching office on significantly fewer wires and over much greater distances than a customer's line can practically span. The same applies to digital subscriber lines (DSL). Fiber in the loop (FITL) is a common method of multiplexing, which uses optical fiber as the backbone. It not only connects POTS phone lines with the rest of the PSTN, but also replaces DSL by connecting directly to Ethernet wired into the home. Asynchronous Transfer Mode is often the communications protocol used. Cable TV has long carried multiplexed television channels, and late in the 20th century began offering the same services as telephone companies. IPTV also depends on multiplexing. Video processing In video editing and processing systems, multiplexing refers to the process of interleaving audio and video into one coherent data stream. In digital video, such a transport stream is normally a feature of a container format which may include metadata and other information, such as subtitles. The audio and video streams may have variable bit rate. Software that produces such a transport stream and/or container is commonly called a multiplexer or muxer. A demuxer is software that extracts or otherwise makes available for separate processing the components of such a stream or container. Digital broadcasting In digital television systems, several variable bit-rate data streams are multiplexed together into a fixed bit-rate transport stream by means of statistical multiplexing. This makes it possible to transfer several video and audio channels simultaneously over the same frequency channel, together with various services. This may involve several standard-definition television (SDTV) programs (particularly on DVB-T, DVB-S2, ISDB and ATSC-C), or one HDTV program, possibly with a single SDTV companion channel, over one 6 to 8 MHz-wide TV channel. The device that accomplishes this is called a statistical multiplexer. In several of these systems, the multiplexing results in an MPEG transport stream. The newer DVB standards DVB-S2 and DVB-T2 have the capacity to carry several HDTV channels in one multiplex. In digital radio, a multiplex (also known as an ensemble) is a number of radio stations that are grouped together. A multiplex is a stream of digital information that includes audio and other data. On communications satellites which carry broadcast television networks and radio networks, this is known as multiple channel per carrier or MCPC. Where multiplexing is not practical (such as where there are different sources using a single transponder), single channel per carrier mode is used.
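Returning to the time-division multiplexing described earlier: the four-terminal airport scenario maps directly onto a fixed round-robin interleaver. A toy sketch (the frame layout and function names are invented for illustration, not any standard API):

```python
from itertools import zip_longest

def tdm_multiplex(streams, fill=0):
    """Interleave one byte from each input stream per time slot."""
    out = bytearray()
    for slot in zip_longest(*streams, fillvalue=fill):
        out.extend(slot)
    return bytes(out)

def tdm_demultiplex(signal: bytes, n_streams: int):
    """Recover stream i by taking every n-th byte, offset by slot i."""
    return [signal[i::n_streams] for i in range(n_streams)]

# Four 2400-baud terminals sharing one 9600-baud line, as in the example above.
terminals = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
line = tdm_multiplex(terminals)
print(line)                       # b'ABCDABCDABCDABCD'
print(tdm_demultiplex(line, 4))   # [b'AAAA', b'BBBB', b'CCCC', b'DDDD']
```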
Analog broadcasting In FM broadcasting and other analog radio media, multiplexing is a term commonly given to the process of adding subcarriers to the audio signal before it enters the transmitter, where modulation occurs. (In fact, the stereo multiplex signal can be generated using time-division multiplexing, by switching between the two (left channel and right channel) input signals at an ultrasonic rate (the subcarrier), and then filtering out the higher harmonics.) Multiplexing in this sense is sometimes known as MPX, which in turn is also an old term for stereophonic FM, seen on stereo systems since the 1960s. Other meanings In spectroscopy, the term is used to indicate that the experiment is performed with a mixture of frequencies at once and their respective responses are unraveled afterward using the Fourier transform principle. In computer programming, it may refer to using a single in-memory resource (such as a file handle) to handle multiple external resources (such as on-disk files). Some electrical multiplexing techniques do not require a physical "multiplexer" device; instead, the term refers to a "keyboard matrix" or "Charlieplexing" design style: Multiplexing may refer to the design of a multiplexed display (non-multiplexed displays are immune to break-up). Multiplexing may refer to the design of a "switch matrix" (non-multiplexed buttons are immune to "phantom keys" and also immune to "phantom key blocking"). In high-throughput DNA sequencing, the term is used to indicate that some artificial sequences (often called barcodes or indexes) have been added to link given sequence reads to a given sample, and thus allow for the sequencing of multiple samples in the same reaction. In sociolinguistics, multiplexity is used to describe the number of distinct connections between individuals who are part of a social network. A multiplex network is one in which members share a number of ties stemming from more than one social context, such as workmates, neighbors, or relatives.
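In the DNA-sequencing sense just described, demultiplexing is essentially a prefix lookup. A toy sketch (the barcodes, read layout, and function name are invented for illustration):

```python
# Hypothetical 4-base sample barcodes prepended to each read.
BARCODES = {"ACGT": "sample_1", "TGCA": "sample_2"}

def demultiplex_reads(reads, barcode_len=4):
    """Assign each read to a sample by its leading barcode."""
    by_sample = {name: [] for name in BARCODES.values()}
    by_sample["unassigned"] = []
    for read in reads:
        tag, insert = read[:barcode_len], read[barcode_len:]
        by_sample[BARCODES.get(tag, "unassigned")].append(insert)
    return by_sample

reads = ["ACGTTTGACC", "TGCAGGATCA", "NNNNAAAAAA"]
print(demultiplex_reads(reads))
# {'sample_1': ['TTGACC'], 'sample_2': ['GGATCA'], 'unassigned': ['AAAAAA']}
```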
Neper
The neper (symbol: Np) is a logarithmic unit for ratios of measurements of physical field and power quantities, such as gain and loss of electronic signals. The unit's name is derived from the name of John Napier, the inventor of logarithms. As is the case for the decibel and bel, the neper is a unit defined in the international standard ISO 80000. It is not part of the International System of Units (SI), but is accepted for use alongside the SI. Definition Like the decibel, the neper is a unit in a logarithmic scale. While the bel uses the decadic (base-10) logarithm to compute ratios, the neper uses the natural logarithm, based on Euler's number (e ≈ 2.71828). The level of a ratio of two signal amplitudes or root-power quantities, with the unit neper, is given by
$$L = \ln\frac{x_1}{x_2}\ \mathrm{Np},$$
where $x_1$ and $x_2$ are the signal amplitudes, and $\ln$ is the natural logarithm. The level of a ratio of two power quantities, with the unit neper, is given by
$$L = \frac{1}{2}\ln\frac{P_1}{P_2}\ \mathrm{Np},$$
where $P_1$ and $P_2$ are the signal powers. In the International System of Quantities, the neper is defined as $1\ \mathrm{Np} = 1$. Units The neper is defined in terms of ratios of field quantities — also called root-power quantities — (for example, voltage or current amplitudes in electrical circuits, or pressure in acoustics), whereas the decibel was originally defined in terms of power ratios. A power ratio 10 log r dB is equivalent to a field-quantity ratio 20 log r dB, since power in a linear system is proportional to the square (Joule's laws) of the amplitude. Hence the decibel and the neper have a fixed ratio to each other:
$$1\ \mathrm{Np} = \frac{20}{\ln 10}\ \mathrm{dB} \approx 8.6859\ \mathrm{dB}$$
and
$$1\ \mathrm{dB} = \frac{\ln 10}{20}\ \mathrm{Np} \approx 0.11513\ \mathrm{Np}.$$
The (voltage) level ratio is then
$$L = \ln\frac{x_1}{x_2}\ \mathrm{Np} = 20\log_{10}\frac{x_1}{x_2}\ \mathrm{dB}.$$
Like the decibel, the neper is a dimensionless unit. The International Telecommunication Union (ITU) recognizes both units. Only the neper is coherent with the SI. Applications The neper is a natural linear unit of relative difference, meaning in nepers (logarithmic units) relative differences add rather than multiply. This property is shared with logarithmic units in other bases, such as the bel. The derived units decineper (1 dNp = 0.1 neper) and centineper (1 cNp = 0.01 neper) are also used. The centineper for root-power quantities corresponds to a log point or log percentage.
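The fixed ratio between the two units makes conversion a one-line scaling. A small sketch of the relations above (the function names are invented for this example):

```python
import math

DB_PER_NP = 20 / math.log(10)   # 1 Np is approximately 8.6859 dB
NP_PER_DB = math.log(10) / 20   # 1 dB is approximately 0.11513 Np

def amplitude_level_np(x1: float, x2: float) -> float:
    """Level of a root-power (amplitude) ratio, in nepers."""
    return math.log(x1 / x2)

def power_level_np(p1: float, p2: float) -> float:
    """Level of a power ratio, in nepers (note the factor 1/2)."""
    return 0.5 * math.log(p1 / p2)

level = amplitude_level_np(10.0, 1.0)   # ln 10, about 2.3026 Np
print(level * DB_PER_NP)                # 20.0 dB, as expected for a 10x amplitude ratio
```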
Network topology
Network topology is the arrangement of the elements (links, nodes, etc.) of a communication network. Network topology can be used to define or describe the arrangement of various types of telecommunication networks, including command and control radio networks, industrial fieldbusses and computer networks. Network topology is the topological structure of a network and may be depicted physically or logically. It is an application of graph theory wherein communicating devices are modeled as nodes and the connections between the devices are modeled as links or lines between the nodes. Physical topology is the placement of the various components of a network (e.g., device location and cable installation), while logical topology illustrates how data flows within a network. Distances between nodes, physical interconnections, transmission rates, or signal types may differ between two different networks, yet their logical topologies may be identical. A network's physical topology is a particular concern of the physical layer of the OSI model. Examples of network topologies are found in local area networks (LAN), a common computer network installation. Any given node in the LAN has one or more physical links to other devices in the network; graphically mapping these links results in a geometric shape that can be used to describe the physical topology of the network. A wide variety of physical topologies have been used in LANs, including ring, bus, mesh and star. Conversely, mapping the data flow between the components determines the logical topology of the network. In comparison, Controller Area Networks, common in vehicles, are primarily distributed control system networks of one or more controllers interconnected with sensors and actuators over, invariably, a physical bus topology. Topologies Two basic categories of network topologies exist, physical topologies and logical topologies. The transmission medium layout used to link devices is the physical topology of the network. For conductive or fiber optical mediums, this refers to the layout of cabling, the locations of nodes, and the links between the nodes and the cabling. The physical topology of a network is determined by the capabilities of the network access devices and media, the level of control or fault tolerance desired, and the cost associated with cabling or telecommunication circuits. In contrast, logical topology is the way that the signals act on the network media, or the way that the data passes through the network from one device to the next without regard to the physical interconnection of the devices. A network's logical topology is not necessarily the same as its physical topology. For example, the original twisted pair Ethernet using repeater hubs was a logical bus topology carried on a physical star topology. Token Ring is a logical ring topology, but is wired as a physical star from the media access unit. Physically, Avionics Full-Duplex Switched Ethernet (AFDX) can be a cascaded star topology of multiple dual redundant Ethernet switches; however, the AFDX virtual links are modeled as time-switched single-transmitter bus connections, thus following the safety model of a single-transmitter bus topology previously used in aircraft. Logical topologies are often closely associated with media access control methods and protocols. Some networks are able to dynamically change their logical topology through configuration changes to their routers and switches. 
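Because a topology is just a graph, the standard shapes can be generated mechanically. A small sketch producing edge sets for a star and a ring (the node labels and function names are arbitrary, chosen for illustration):

```python
def star_edges(n: int):
    """Star topology: node 0 is the hub, nodes 1..n-1 are spokes."""
    return {(0, i) for i in range(1, n)}

def ring_edges(n: int):
    """Ring topology: each node links to its successor, closing the loop."""
    return {(i, (i + 1) % n) for i in range(n)}

print(sorted(star_edges(5)))  # [(0, 1), (0, 2), (0, 3), (0, 4)]
print(sorted(ring_edges(4)))  # [(0, 1), (1, 2), (2, 3), (3, 0)]
```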
Links The transmission media (often referred to in the literature as the physical media) used to link devices to form a computer network include electrical cables (Ethernet, HomePNA, power line communication, G.hn), optical fiber (fiber-optic communication), and radio waves (wireless networking). In the OSI model, these are defined at layers 1 and 2 — the physical layer and the data link layer. A widely adopted family of transmission media used in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards (e.g. those defined by IEEE 802.11) use radio waves; others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data. Wired technologies The following wired technologies are ordered, roughly, from slowest to fastest transmission speed. Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by an insulating layer (typically a flexible material with a high dielectric constant), which itself is surrounded by a conductive layer. The insulation between the conductors helps maintain the characteristic impedance of the cable, which can help improve its performance. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second. ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network. Signal traces on printed circuit boards are common for board-level serial communication, particularly between certain types of integrated circuits, a common example being SPI. Ribbon cable (untwisted and possibly unshielded) has been a cost-effective medium for serial protocols, especially within metallic enclosures or rolled within copper braid or foil, over short distances, or at lower data rates. Several serial network protocols can be deployed without shielded or twisted pair cabling, that is, with flat or ribbon cable, or a hybrid flat and twisted ribbon cable, should EMC, length, and bandwidth constraints permit: RS-232, RS-422, RS-485, CAN, GPIB, SCSI, etc. Twisted pair wire is the most widely used medium for all telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer network cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted pair (STP). Each form comes in several category ratings, designed for use in various scenarios. An optical fiber is a glass fiber. It carries pulses of light that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity from electrical interference.
Optical fibers can simultaneously carry multiple wavelengths of light, which greatly increases the rate that data can be sent, and helps enable data rates of up to trillions of bits per second. Optic fibers can be used for long runs of cable carrying very high data rates, and are used for undersea communications cables to interconnect continents. Price is a main factor distinguishing wired and wireless technology options in a business. Wireless options command a price premium that can make purchasing wired computers, printers and other devices a financial benefit. Before making the decision to purchase hard-wired technology products, a review of the restrictions and limitations of the selections is necessary. Business and employee needs may override any cost considerations. Wireless technologies Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 mi) apart. Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geostationary orbit above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals. Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area. Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi. Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices. Exotic technologies There have been various attempts at transporting data over exotic media: IP over Avian Carriers was a humorous April Fools' Request for Comments, issued as RFC 1149. It was implemented in real life in 2001. Extending the Internet to interplanetary dimensions via radio waves, the Interplanetary Internet. Both cases have a large round-trip delay time, which gives slow two-way communication, but does not prevent sending large amounts of information. Nodes Network nodes are the points of connection of the transmission medium to transmitters and receivers of the electrical, optical, or radio signals carried in the medium. Nodes may be associated with a computer, but certain types may have only a microcontroller at a node or possibly no programmable device at all. In the simplest of serial arrangements, one RS-232 transmitter can be connected by a pair of wires to one receiver, forming two nodes on one link, or a point-to-point topology. Some protocols permit a single node to only either transmit or receive (e.g., ARINC 429). Other protocols have nodes that can both transmit and receive into a single channel (e.g., CAN can have many transceivers connected to a single bus).
While the conventional system building blocks of a computer network include network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, gateways, and firewalls, most address network concerns beyond the physical network topology and may be represented as single nodes on a particular physical network topology. Network interfaces A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry. The NIC responds to traffic addressed to a network address for either the NIC or the computer as a whole. In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce. Repeaters and hubs A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal may be reformed or retransmitted at a higher power level, to the other side of an obstruction, possibly using a different transmission medium, so that the signal can cover longer distances without degradation. Commercial repeaters have extended RS-232 segments from 15 meters to over a kilometer. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart. Repeaters work within the physical layer of the OSI model, that is, there is no end-to-end change in the physical protocol across the repeater, or repeater pair, even if a different physical layer may be used between the ends of the repeater, or repeater pair. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule. A repeater with multiple ports is known as a hub: an Ethernet hub in Ethernet networks, a USB hub in USB networks. USB networks use hubs to form tiered-star topologies. Ethernet hubs and repeaters in LANs have been mostly obsoleted by modern switches. Bridges A network bridge connects and filters traffic between two network segments at the data link layer (layer 2) of the OSI model to form a single network. This breaks the network's collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks. Bridges come in three basic types: Local bridges: Directly connect LANs. Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.
Wireless bridges: Can be used to join LANs or connect remote devices to LANs. Switches A network switch is a device that forwards and filters OSI layer 2 datagrams (frames) between ports based on the destination MAC address in each frame. A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than to all ports connected. It can be thought of as a multi-port bridge. It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If an unknown destination is targeted, the switch broadcasts to all ports but the source. Switches normally have numerous ports, facilitating a star topology for devices, and cascading additional switches. Multi-layer switches are capable of routing based on layer 3 addressing or additional logical levels. The term switch is often used loosely to include devices such as routers and bridges, as well as devices that may distribute traffic based on load or based on application content (e.g., a Web URL identifier). Routers A router is an internetworking device that forwards packets between networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3). The routing information is often processed in conjunction with the routing table (or forwarding table). A router uses its routing table to determine where to forward packets. A destination in a routing table can be a black hole: data can be routed into it, but no further processing is done for that data, i.e. the packets are dropped. Modems Modems (MOdulator-DEModulator) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Modems are commonly used for telephone lines, using a digital subscriber line technology. Firewalls A firewall is a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks. Classification The study of network topology recognizes eight basic topologies: point-to-point, bus, star, ring or circular, mesh, tree, hybrid, and daisy chain. Point-to-point The simplest topology is a dedicated link between two endpoints. The easiest to understand of the variations of point-to-point topology is a point-to-point communication channel that appears, to the user, to be permanently associated with the two endpoints. A child's tin can telephone is one example of a physical dedicated channel. Using circuit-switching or packet-switching technologies, a point-to-point circuit can be set up dynamically and dropped when no longer needed. Switched point-to-point topologies are the basic model of conventional telephony. The value of a permanent point-to-point network is unimpeded communications between the two endpoints. The value of an on-demand point-to-point connection is proportional to the number of potential pairs of subscribers and has been expressed as Metcalfe's Law. Daisy chain Daisy chaining is accomplished by connecting each computer in series to the next.
If a message is intended for a computer partway down the line, each system bounces it along in sequence until it reaches the destination. A daisy-chained network can take two basic forms: linear and ring. A linear topology puts a two-way link between one computer and the next. However, this was expensive in the early days of computing, since each computer (except for the ones at each end) required two receivers and two transmitters. By connecting the computers at each end of the chain, a ring topology can be formed. When a node sends a message, the message is processed by each computer in the ring. An advantage of the ring is that the number of transmitters and receivers can be cut in half. Since a message will eventually loop all of the way around, transmission does not need to go in both directions. Alternatively, the ring can be used to improve fault tolerance: if the ring breaks at a particular link, then the transmission can be sent via the reverse path, thereby ensuring that all nodes remain connected in the case of a single failure. Bus In local area networks using bus topology, each node is connected by interface connectors to a single central cable. This is the 'bus', also referred to as the backbone or trunk – all data transmission between nodes in the network is transmitted over this common transmission medium and is able to be received by all nodes in the network simultaneously. A signal containing the address of the intended receiving machine travels from a source machine in both directions to all machines connected to the bus until it finds the intended recipient, which then accepts the data. If the machine address does not match the intended address for the data, the data portion of the signal is ignored. Since the bus topology consists of only one wire, it is less expensive to implement than other topologies, but the savings are offset by the higher cost of managing the network. Additionally, since the network is dependent on the single cable, it can be the single point of failure of the network. In this topology, data being transferred may be accessed by any node. Linear bus In a linear bus network, all of the nodes of the network are connected to a common transmission medium which has just two endpoints. When the electrical signal reaches the end of the bus, the signal is reflected back down the line, causing unwanted interference. To prevent this, the two endpoints of the bus are normally terminated with a device called a terminator. Distributed bus In a distributed bus network, all of the nodes of the network are connected to a common transmission medium with more than two endpoints, created by adding branches to the main section of the transmission medium – the physical distributed bus topology functions in exactly the same fashion as the physical linear bus topology because all nodes share a common transmission medium. Star In star topology (also called hub-and-spoke), every peripheral node (computer workstation or any other peripheral) is connected to a central node called a hub or switch. The hub is the server and the peripherals are the clients. The network does not necessarily have to resemble a star to be classified as a star network, but all of the peripheral nodes on the network must be connected to one central hub. All traffic that traverses the network passes through the central hub, which acts as a signal repeater. The star topology is considered the easiest topology to design and implement.
One advantage of the star topology is the simplicity of adding additional nodes. The primary disadvantage of the star topology is that the hub represents a single point of failure. Also, since all peripheral communication must flow through the central hub, the aggregate central bandwidth forms a network bottleneck for large clusters. Extended star The extended star network topology extends a physical star topology by one or more repeaters between the central node and the peripheral (or 'spoke') nodes. The repeaters are used to extend the maximum transmission distance of the physical layer, the point-to-point distance between the central node and the peripheral nodes. Repeaters allow greater transmission distance, further than would be possible using just the transmitting power of the central node. The use of repeaters can also overcome limitations from the standard upon which the physical layer is based. A physical extended star topology in which repeaters are replaced with hubs or switches is a type of hybrid network topology and is referred to as a physical hierarchical star topology, although some texts make no distinction between the two topologies. A physical hierarchical star topology can also be referred to as a tier-star topology. This topology differs from a tree topology in the way star networks are connected together. A tier-star topology uses a central node, while a tree topology uses a central bus and can also be referred to as a star-bus network. Distributed star A distributed star is a network topology that is composed of individual networks that are based upon the physical star topology connected in a linear fashion – i.e., 'daisy-chained' – with no central or top level connection point (e.g., two or more 'stacked' hubs, along with their associated star connected nodes or 'spokes'). Ring A ring topology is a daisy chain in a closed loop. Data travels around the ring in one direction. When one node sends data to another, the data passes through each intermediate node on the ring until it reaches its destination. The intermediate nodes repeat (retransmit) the data to keep the signal strong. Every node is a peer; there is no hierarchical relationship of clients and servers. If one node is unable to retransmit data, it severs communication between the nodes before and after it in the bus. Advantages: When the load on the network increases, its performance is better than that of a bus topology. There is no need for a network server to control the connectivity between workstations. Disadvantages: Aggregate network bandwidth is bottlenecked by the weakest link between two nodes. Mesh The value of fully meshed networks is proportional to the exponent of the number of subscribers, assuming that communicating groups of any size, up to and including all the endpoints, may form; this is approximated by Reed's Law. Fully connected network In a fully connected network, all nodes are interconnected. (In graph theory this is called a complete graph.) The simplest fully connected network is a two-node network. A fully connected network doesn't need to use packet switching or broadcasting. However, the number of connections grows quadratically with the number of nodes, as $c = \frac{n(n-1)}{2}$, which makes fully connected topologies impractical for large networks. This kind of topology does not allow the failure of one node to affect the other nodes in the network. Partially connected network In a partially connected network, certain nodes are connected to exactly one other node; but some nodes are connected to two or more other nodes with a point-to-point link.
This makes it possible to make use of some of the redundancy of mesh topology that is physically fully connected, without the expense and complexity required for a connection between every node in the network. Hybrid Hybrid topology is also known as hybrid network. Hybrid networks combine two or more topologies in such a way that the resulting network does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.). For example, a tree network (or star-bus network) is a hybrid topology in which star networks are interconnected via bus networks. However, a tree network connected to another tree network is still topologically a tree network, not a distinct network type. A hybrid topology is always produced when two different basic network topologies are connected. A star-ring network consists of two or more ring networks connected using a multistation access unit (MAU) as a centralized hub. Snowflake topology is meshed at the core, but tree shaped at the edges. Two other hybrid network types are hybrid mesh and hierarchical star. Centralization The star topology reduces the probability of a network failure by connecting all of the peripheral nodes (computers, etc.) to a central node. When the physical star topology is applied to a logical bus network such as Ethernet, this central node (traditionally a hub) rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. The failure of a transmission line linking any peripheral node to the central node will result in the isolation of that peripheral node from all others, but the remaining peripheral nodes will be unaffected. However, the disadvantage is that the failure of the central node will cause the failure of all of the peripheral nodes. If the central node is passive, the originating node must be able to tolerate the reception of an echo of its own transmission, delayed by the two-way round trip transmission time (i.e. to and from the central node) plus any delay generated in the central node. An active star network has an active central node that usually has the means to prevent echo-related problems. A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks arranged in a hierarchy. This tree structure has individual peripheral nodes (e.g. leaves) which are required to transmit to and receive from one other node only and are not required to act as repeaters or regenerators. Unlike the star network, the functionality of the central node may be distributed. As in the conventional star network, individual nodes may thus still be isolated from the network by a single-point failure of a transmission path to the node. If a link connecting a leaf fails, that leaf is isolated; if a connection to a non-leaf node fails, an entire section of the network becomes isolated from the rest. To alleviate the amount of network traffic that comes from broadcasting all signals to all nodes, more advanced central nodes were developed that are able to keep track of the identities of the nodes that are connected to the network. These network switches will learn the layout of the network by listening on each port during normal data transmission, examining the data packets and recording the address/identifier of each connected node and which port it is connected to in a lookup table held in memory. 
This lookup table then allows future transmissions to be forwarded to the intended destination only. Daisy chain topology is a way of connecting network nodes in a linear or ring structure. It is used to transmit messages from one node to the next until they reach the destination node. A daisy chain network can have two types: linear and ring. A linear daisy chain network is like an electrical series circuit, where the first and last nodes are not connected. A ring daisy chain network is one where the first and last nodes are connected, forming a loop. Decentralization In a partially connected mesh topology, there are at least two nodes with two or more paths between them to provide redundant paths in case the link providing one of the paths fails. Decentralization is often used to compensate for the single-point-failure disadvantage that is present when using a single device as a central node (e.g., in star and tree networks). A special kind of mesh, limiting the number of hops between two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more difficult to design and implement, but their decentralized nature makes them very useful. This is similar in some ways to a grid network, where a linear or ring topology is used to connect systems in multiple directions. A multidimensional ring has a toroidal topology, for instance. A fully connected network, complete topology, or full mesh topology is a network topology in which there is a direct link between all pairs of nodes. In a fully connected network with n nodes, there are n(n − 1)/2 direct links. Networks designed with this topology are usually very expensive to set up, but provide a high degree of reliability due to the multiple paths for data that are provided by the large number of redundant links between nodes. This topology is mostly seen in military applications.
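The switch learning behaviour described under Centralization above lends itself to a short illustration. The following Python sketch is illustrative only: the class, its method and the port-numbering scheme are invented for this example and are not part of any real switch firmware, but they show how a lookup table built from observed source addresses lets later frames be forwarded to a single learned port instead of being flooded.

# Minimal sketch of a learning switch's lookup table (illustrative names, not real firmware).
class LearningSwitch:
    def __init__(self, num_ports):
        self.table = {}              # address -> port, learned from observed traffic
        self.num_ports = num_ports

    def handle_frame(self, src, dst, in_port):
        self.table[src] = in_port    # learn (or refresh) which port the source is reachable on
        out = self.table.get(dst)
        if out is None:
            # destination not yet learned: flood to every port except the one the frame arrived on
            return [p for p in range(self.num_ports) if p != in_port]
        return [out]                 # forward to the single learned port only

# Example: once A has been heard from, traffic addressed to A is no longer flooded.
sw = LearningSwitch(4)
print(sw.handle_frame("A", "B", in_port=0))  # B unknown -> flood ports 1, 2, 3
print(sw.handle_frame("B", "A", in_port=2))  # A was learned on port 0 -> [0]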
Technology
Networks
null
41415
https://en.wikipedia.org/wiki/Noise
Noise
Noise is sound, chiefly unwanted, unintentional, or harmful sound considered unpleasant, loud, or disruptive to mental or hearing faculties. From a physics standpoint, there is no distinction between noise and desired sound, as both are vibrations through a medium, such as air or water. The difference arises when the brain receives and perceives a sound. Acoustic noise is any sound in the acoustic domain, either deliberate (e.g., music or speech) or unintended. In contrast, noise in electronics may not be audible to the human ear and may require instruments for detection. In audio engineering, noise can refer to the unwanted residual electronic noise signal that gives rise to acoustic noise heard as a hiss. This signal noise is commonly measured using A-weighting or ITU-R 468 weighting. In experimental sciences, noise can refer to any random fluctuations of data that hinders perception of a signal. Measurement Sound is measured based on the amplitude and frequency of a sound wave. Amplitude measures how forceful the wave is. The energy in a sound wave is measured in decibels (dB), the measure of loudness, or intensity of a sound; this measurement describes the amplitude of a sound wave. Decibels are expressed in a logarithmic scale. On the other hand, pitch describes the frequency of a sound and is measured in hertz (Hz). The main instrument to measure sounds in the air is the Sound Level Meter. There are many different varieties of instruments that are used to measure noise - Noise Dosimeters are often used in occupational environments, noise monitors are used to measure environmental noise and noise pollution, and recently smartphone-based sound level meter applications (apps) are being used to crowdsource and map recreational and community noise. A-weighting is applied to a sound spectrum to represent the sound that humans are capable of hearing at each frequency. Sound pressure is thus expressed in terms of dBA. 0 dBA is the softest level that a person can hear. Normal speaking voices are around 65 dBA. A rock concert can be about 120 dBA. Recording and reproduction In audio, recording, and broadcast systems, audio noise refers to the residual low-level sound (four major types: hiss, rumble, crackle, and hum) that is heard in quiet periods of program. This variation from the expected pure sound or silence can be caused by the audio recording equipment, the instrument, or ambient noise in the recording room. In audio engineering it can refer either to the acoustic noise from loudspeakers or to the unwanted residual electronic noise signal that gives rise to acoustic noise heard as hiss. This signal noise is commonly measured using A-weighting or ITU-R 468 weighting Noise is often generated deliberately and used as a test signal for audio recording and reproduction equipment. Environmental noise Environmental noise is the accumulation of all noise present in a specified environment. The principal sources of environmental noise are surface motor vehicles, aircraft, trains and industrial sources. These noise sources expose millions of people to noise pollution that creates not only annoyance, but also significant health consequences such as elevated incidence of hearing loss, cardiovascular disease, and many others. Urban noise is generally not of an intensity that causes hearing loss but it interrupts sleep, disturbs communication and interferes with other human activities. 
There are a variety of mitigation strategies and controls available to reduce sound levels including source intensity reduction, land-use planning strategies, noise barriers and sound baffles, time of day use regimens, vehicle operational controls and architectural acoustics design measures. Regulation Certain geographic areas or specific occupations may be at a higher risk of being exposed to constantly high levels of noise; regulation may prevent negative health outcomes. Noise regulation includes statutes or guidelines relating to sound transmission established by national, state or provincial and municipal levels of government. Environmental noise is governed by laws and standards which set maximum recommended levels of noise for specific land uses, such as residential areas, areas of outstanding natural beauty, or schools. These standards usually specify measurement using a weighting filter, most often A-weighting. United States In 1972, the Noise Control Act was passed to promote a healthy living environment for all Americans, where noise does not pose a threat to human health. This policy's main objectives were: (1) establish coordination of research in the area of noise control, (2) establish federal standards on noise emission for commercial products, and (3) promote public awareness about noise emission and reduction. The Quiet Communities Act of 1978 promotes noise control programs at the state and local level and developed a research program on noise control. Both laws authorized the Environmental Protection Agency to study the effects of noise and evaluate regulations regarding noise control. The National Institute for Occupational Safety and Health (NIOSH) provides recommendation on noise exposure in the workplace. In 1972 (revised in 1998), NIOSH published a document outlining recommended standards relating to the occupational exposure to noise, with the purpose of reducing the risk of developing permanent hearing loss related to exposure at work. This publication set the recommended exposure limit (REL) of noise in an occupation setting to 85 dBA for 8 hours using a 3-dB exchange rate (every 3-dB increase in level, duration of exposure should be cut in half, i.e., 88 dBA for 4 hours, 91 dBA for 2 hours, 94 dBA for 1 hour, etc.). However, in 1973 the Occupational Safety and Health Administration (OSHA) maintained the requirement of an 8-hour average of 90 dBA. The following year, OSHA required employers to provide a hearing conservation program to workers exposed to 85 dBA average 8-hour workdays. Europe The European Environment Agency regulates noise control and surveillance within the European Union. The Environmental Noise Directive was set to determine levels of noise exposure, increase public access to information regarding environmental noise, and reduce environmental noise. Additionally, in the European Union, underwater noise is a pollutant according to the Marine Strategy Framework Directive (MSFD). The MSFD requires EU Member States to achieve or maintain Good Environmental Status, meaning that the "introduction of energy, including underwater noise, is at levels that do not adversely affect the marine environment". Health effects Exposure to noise is associated with several negative health outcomes. Depending on duration and level of exposure, noise may cause or increase the likelihood of hearing loss, high blood pressure, ischemic heart disease, sleep disturbances, injuries, and even decreased school performance. 
When noise is prolonged, the body's stress responses can be triggered, which can include an increased heart rate and rapid breathing. There are also causal relationships between noise and psychological effects such as annoyance, psychiatric disorders, and effects on psychosocial well-being. Noise exposure has increasingly been identified as a public health issue, especially in an occupational setting, as demonstrated by the creation of NIOSH's Noise and Hearing Loss Prevention program. Noise has also proven to be an occupational hazard, as it is the most common work-related pollutant. Noise-induced hearing loss, when associated with noise exposure at the workplace, is also called occupational hearing loss. For example, some occupational studies have shown that those who are regularly exposed to noise above 85 decibels tend to have higher blood pressure than those who are not exposed. Hearing loss prevention While noise-induced hearing loss is permanent, it is also preventable. Particularly in the workplace, regulations may set a permissible exposure limit for noise. This can be especially important for professionals working in settings with consistent exposure to loud sounds, such as musicians, music teachers and audio engineers. Examples of measures taken to prevent noise-induced hearing loss in the workplace include engineering noise control, the Buy-Quiet initiative, creation of the Safe-In-Sound award, and noise surveillance. OSHA requires the use of hearing protection, but hearing protection devices (HPDs) used without individual selection, training and fit testing do not significantly reduce the risk of hearing loss. For example, one study covered more than 19,000 workers, some of whom usually used hearing protective devices and some of whom did not use them at all; there was no statistically significant difference in the risk of noise-induced hearing loss between the two groups. Literary views Roland Barthes distinguishes between physiological noise, which is merely heard, and psychological noise, which is actively listened to. Physiological noise is felt subconsciously as the vibrations of the noise (sound) waves physically interact with the body, while psychological noise is perceived as our conscious awareness shifts its attention to that noise. Luigi Russolo, one of the first composers of noise music, wrote the essay The Art of Noises. He argued that any kind of noise could be used as music, as audiences become more familiar with noises caused by technological advancements; noise has become so prominent that pure sound no longer exists. Avant-garde composer Henry Cowell claimed that technological advancements have reduced unwanted noises from machines, but have not managed so far to eliminate them. Felix Urban sees noise as a result of cultural circumstances. In his comparative study on sound and noise in cities, he points out that noise regulations are only one indicator of what is considered harmful. It is the way in which people live and behave (acoustically) that determines how sounds are perceived.
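The 3-dB exchange rate quoted in the Regulation section above can be turned into a small worked example. The sketch below assumes the NIOSH reference values given there (85 dBA for 8 hours, halving the allowed duration for every 3-dB increase); the function name and default arguments are invented for illustration.

# Sketch of the NIOSH 3-dB exchange rate: halve the recommended duration for every 3 dB above 85 dBA.
def recommended_hours(level_dba, rel=85.0, base_hours=8.0, exchange_rate=3.0):
    return base_hours / (2 ** ((level_dba - rel) / exchange_rate))

for level in (85, 88, 91, 94):
    print(level, "dBA ->", recommended_hours(level), "hours")
# 85 -> 8.0, 88 -> 4.0, 91 -> 2.0, 94 -> 1.0, matching the figures quoted above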
Physical sciences
Waves
null
41458
https://en.wikipedia.org/wiki/Optical%20disc
Optical disc
An optical disc is a flat, usually disc-shaped object that stores information in the form of physical variations on its surface that can be read with the aid of a beam of light. Optical discs can be reflective, where the light source and detector are on the same side of the disc, or transmissive, where light shines through the disc to be detected on the other side. Optical discs can store analog information (e.g. Laserdisc), digital information (e.g. DVD), or store both on the same disc (e.g. CD Video). Their main uses are the distribution of media and data, and long-term archival. Design and technology The encoding material sits atop a thicker substrate (usually polycarbonate) that makes up the bulk of the disc and forms a dust defocusing layer. The encoding pattern follows a continuous, spiral path covering the entire disc surface and extending from the innermost track to the outermost track. The data are stored on the disc with a laser or stamping machine, and can be accessed when the data path is illuminated with a laser diode in an optical disc drive that spins the disc at speeds of about 200 to 4,000 RPM or more, depending on the drive type, disc format, and the distance of the read head from the center of the disc (outer tracks are read at a higher data speed due to higher linear velocities at the same angular velocities). Most optical discs exhibit a characteristic iridescence as a result of the diffraction grating formed by their grooves. This side of the disc contains the actual data and is typically coated with a transparent material, usually lacquer. The reverse side of an optical disc usually has a printed label, sometimes made of paper but often printed or stamped onto the disc itself. Unlike the 3-inch floppy disk, most optical discs do not have an integrated protective casing and are therefore susceptible to data transfer problems due to scratches, fingerprints, and other environmental problems. Blu-rays have a coating called durabis that mitigates these problems. Optical discs are usually between in diameter, with being the most common size. The so-called program area that contains the data commonly starts 25 millimetres away from the center point. A typical disc is about thick, while the track pitch (distance from the center of one track to the center of the next) ranges from 1.6 μm (for CDs) to 320 nm (for Blu-ray discs). Recording types An optical disc is designed to support one of three recording types: read-only (such as CD and CD-ROM), recordable (write-once, like CD-R), or re-recordable (rewritable, like CD-RW). Write-once optical discs commonly have an organic dye (may also be a (phthalocyanine) azo dye, mainly used by Verbatim, or an oxonol dye, used by Fujifilm) recording layer between the substrate and the reflective layer. Rewritable discs typically contain an alloy recording layer composed of a phase change material, most often AgInSbTe, an alloy of silver, indium, antimony, and tellurium. Azo dyes were introduced in 1996 and phthalocyanine only began to see wide use in 2002. The type of dye and the material used on the reflective layer on an optical disc may be determined by shining a light through the disc, as different dye and material combinations have different colors. Blu-ray Disc recordable discs do not usually use an organic dye recording layer, instead using an inorganic recording layer. 
Those that do are known as low-to-high (LTH) discs and can be made in existing CD and DVD production lines, but are of lower quality than traditional Blu-ray recordable discs. File systems File systems specifically created for optical discs are ISO 9660 and the Universal Disk Format (UDF). ISO 9660 can be extended using the "Joliet" extension to store longer file names than standalone ISO 9660. The "Rock Ridge" extension can store even longer file names and Unix/Linux-style file permissions, but is not recognized by Windows or by DVD players and similar devices that can read data discs. For cross-platform compatibility, multiple file systems can co-exist on one disc and reference the same files. Usage Optical discs are most commonly used for digital preservation, storing music (particularly for use in a CD player), video (such as for use in a Blu-ray player), or data and programs for personal computers (PC), as well as offline hard copy data distribution due to lower per-unit prices than other types of media. The Optical Storage Technology Association (OSTA) promoted standardized optical storage formats. Libraries and archives enact optical media preservation procedures to ensure continued usability in the computer's optical disc drive or corresponding disc player. File operations of traditional mass storage devices such as flash drives, memory cards and hard drives can be simulated using a UDF live file system. For computer data backup and physical data transfer, optical discs such as CDs and DVDs are gradually being replaced with faster, smaller solid-state devices, especially the USB flash drive. This trend is expected to continue as USB flash drives continue to increase in capacity and drop in price. Additionally, music, movies, games, software and TV shows purchased, shared or streamed over the Internet have significantly reduced the number of audio CDs, video DVDs and Blu-ray discs sold annually. However, audio CDs and Blu-rays are still preferred and bought by some, as a way of supporting their favorite works while getting something tangible in return, and also because audio CDs (alongside vinyl records and cassette tapes) contain uncompressed audio without the artifacts introduced by lossy compression algorithms like MP3, and Blu-rays offer better image and sound quality than streaming media, without visible compression artifacts, due to higher bitrates and more available storage space. Blu-ray content can sometimes be obtained by torrenting, but this may not be an option for some users due to restrictions put in place by ISPs on legal or copyright grounds, low download speeds, or a lack of available storage space, since the content may amount to several dozen gigabytes. Blu-rays may be the only option for those looking to play large games without having to download them over an unreliable or slow internet connection, which is why they are still (as of 2020) widely used by gaming consoles, like the PlayStation 4 and Xbox One X. As of 2020, it is unusual for PC games to be available in a physical format like Blu-ray. Optical discs are typically stored in special cases, sometimes called jewel cases. Discs should not have any stickers and should not be stored together with paper; papers must be removed from the jewel case before storage. Discs should be handled by the edges to prevent scratching, with the thumb on the inner edge of the disc. ISO 18938:2014 describes best practices for optical disc handling. 
Optical disc cleaning should never be done in a circular pattern, to prevent concentric circles from forming on the disc. Improper cleaning can scratch the disc. Recordable discs should not be exposed to light for extended periods of time. Optical discs should be stored in dry and cool conditions to increase longevity, with temperatures between -10 and 23 °C, never exceeding 32 °C, and with humidity never falling below 10%, with recommended storage at 20 to 50% relative humidity without fluctuations of more than ±10%. Durability Although optical discs are more durable than earlier audio-visual and data storage formats, they are susceptible to environmental and daily-use damage if handled improperly. Optical discs are not prone to uncontrollable catastrophic failures such as head crashes, power surges, or exposure to water in the way that hard disk drives and flash storage are: the storage controller sits in the drive rather than on the disc itself (unlike hard disk drives and flash memory controllers), a disc is usually recoverable from a defective optical drive by pushing a blunt needle into the emergency ejection pinhole, and the disc has no point of immediate water ingress and no integrated circuitry. Security As the medium itself is accessed only through a laser beam and has no internal control circuitry, it cannot contain malicious hardware in the same way as so-called rubber-duckies or USB killers. Like any data storage medium, however, optical discs can contain malicious data and are able to spread malware, as happened in the case of the Sony BMG copy protection rootkit scandal in 2005, in which Sony misused discs by pre-loading them with malware. Many types of optical discs are factory-pressed or finalized write once read many storage devices and would therefore not be effective at spreading computer worms that are designed to spread by copying themselves onto optical media, because data on those discs cannot be modified once pressed or written. However, re-writable disc technologies (such as CD-RW) are able to spread this type of malware. History The first recorded historical use of an optical disc was in 1884, when Alexander Graham Bell, Chichester Bell and Charles Sumner Tainter recorded sound on a glass disc using a beam of light. Optophonie is a very early (1931) example of a recording device using light for both recording and playing back sound signals on a transparent photograph. An early analogue optical disc system existed in 1935, used on Welte's sampling organ. An early analog optical disc used for video recording was invented by David Paul Gregg in 1958 and patented in the US in 1961 and 1969. This form of optical disc was a very early form of the DVD. It is of special interest that a later patent, filed in 1989 and issued in 1990, generated royalty income for Pioneer Corporation's DVA until 2007, by then encompassing the CD, DVD, and Blu-ray systems. In the early 1960s, the Music Corporation of America bought Gregg's patents and his company, Gauss Electrophysics. American inventor James T. Russell has been credited with inventing the first system to record a digital signal on an optical transparent foil that is lit from behind by a high-power halogen lamp. Russell's patent application was first filed in 1966 and he was granted a patent in 1970. Following litigation, Sony and Philips licensed Russell's patents (then held by a Canadian company, Optical Recording Corp.) in the 1980s. 
Both Gregg's and Russell's discs were floppy media read in transparent mode, which imposes serious drawbacks. After these came successive generations of optical disc formats, including LaserDisc (1969), WORM (1979), the Compact Disc (1984), DVD (1995), Blu-ray (2005) and HD DVD (2006); more formats are currently under development. First-generation From the start optical discs were used to store broadcast-quality analog video, and later digital media such as music or computer software. The LaserDisc format stored analog video signals for the distribution of home video, but commercially lost to the VHS videocassette format, due mainly to its high cost and non-re-recordability; other first-generation disc formats were designed only to store digital data and were not initially capable of use as a digital video medium. Most first-generation disc devices had an infrared laser reading head. The minimum size of the laser spot is proportional to the wavelength of the laser, so wavelength is a limiting factor upon the amount of information that can be stored in a given physical area on the disc. The infrared range is beyond the long-wavelength end of the visible light spectrum, so it supports less density than shorter-wavelength visible light. One example of high-density data storage capacity, achieved with an infrared laser, is 700 MB of net user data for a 12 cm compact disc. Other factors that affect data storage density include: the existence of multiple layers of data on the disc, the method of rotation (constant linear velocity (CLV), constant angular velocity (CAV), or zoned-CAV), the composition of lands and pits, and how much margin is left unused at the center and the edge of the disc. Types of Optical Discs: Compact disc (CD) and derivatives Audio CD Video CD (VCD) Super Video CD CD Video CD-Interactive LaserDisc GD-ROM Phase-change Dual Double Density Compact Disc (DDCD) Magneto-optical disc MiniDisc (MD) MD Data Write Once Read Many (WORM) Laserdisc In the Netherlands in 1969, Philips Research physicist Pieter Kramer invented an optical videodisc in reflective mode with a protective layer read by a focused laser beam; the corresponding patent was filed in 1972 and issued in 1991. Kramer's physical format is used in all optical discs. In 1975, Philips and MCA began to work together, and in 1978, commercially much too late, they presented their long-awaited Laserdisc in Atlanta. MCA delivered the discs and Philips the players. However, the presentation was a commercial failure, and the cooperation ended. In Japan and the U.S., Pioneer succeeded with the Laserdisc until the advent of the DVD. In 1979, Philips and Sony, in consortium, successfully developed the audio compact disc. WORM drive In 1979, Exxon STAR Systems in Pasadena, CA built a computer-controlled WORM drive that utilized thin film coatings of tellurium and selenium on a 12" diameter glass disk. The recording system utilized blue light at 457 nm to record and red light at 632.8 nm to read. STAR Systems was bought by Storage Technology Corporation (STC) in 1981 and moved to Boulder, CO. Development of the WORM technology was continued using 14" diameter aluminum substrates. Beta testing of the disk drives, originally labeled the Laser Storage Drive 2000 (LSD-2000), was only moderately successful. Many of the disks were shipped to RCA Laboratories (now David Sarnoff Research Center) to be used in the Library of Congress archiving efforts. The STC disks utilized a sealed cartridge with an optical window for protection. 
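The point made above, that the laser wavelength limits how much information fits in a given area, can be made concrete with a rough scaling sketch. A diffraction-limited spot diameter scales roughly with wavelength divided by numerical aperture, so areal density scales roughly with the square of the inverse; the specific wavelengths and numerical apertures used below (about 780 nm and 0.45 for CD, about 405 nm and 0.85 for Blu-ray) are typical published figures and are not values given in this article.

# Rough scaling sketch: areal density ~ (NA / wavelength)^2 for a diffraction-limited spot.
# The wavelengths/NAs below are typical figures for CD and Blu-ray, not taken from this article.
def relative_density(wavelength_nm, numerical_aperture):
    return (numerical_aperture / wavelength_nm) ** 2

cd = relative_density(780, 0.45)   # first-generation infrared laser (assumed typical values)
bd = relative_density(405, 0.85)   # third-generation blue-violet laser (assumed typical values)
print(bd / cd)                     # roughly 13x more data per unit area from spot size alone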
CD-ROM The CD-ROM format was developed by Sony and Philips, introduced in 1984, as an extension of Compact Disc Digital Audio and adapted to hold any form of digital data. The same year, Sony demonstrated a LaserDisc data storage format, with a larger data capacity of 3.28 GB. In the late 1980s and early 1990s, Optex, Inc. of Rockville, MD, built an erasable optical digital video disc system using Electron Trapping Optical Media (ETOM). Although this technology was written up in Video Pro Magazine's December 1994 issue promising "the death of the tape", it was never marketed. Magnetic disks found only limited application in storing large amounts of data, so other storage techniques were needed. It was found that optical means could be used to build large-capacity storage devices, which in turn gave rise to optical discs. The very first application of this kind was the compact disc (CD), used in audio systems. Sony and Philips developed the first generation of CDs, with complete specifications for these devices, in the mid-1980s. This technology exploited the possibility of representing an analog signal as a digital signal: 16-bit samples of the analog signal were taken at a rate of 44,100 samples per second (a worked check of these figures appears at the end of this article). This sample rate was based on the Nyquist rate of 40,000 samples per second required to capture the audible frequency range to 20 kHz without aliasing, with an additional tolerance to allow the use of less-than-perfect analog audio pre-filters to remove any higher frequencies. The first version of the standard allowed up to 74 minutes of music or 650 MB of data storage. Second-generation Second-generation optical discs were for storing great amounts of data, including broadcast-quality digital video. Such discs are usually read with a visible-light laser (usually red); the shorter wavelength and greater numerical aperture allow a narrower light beam, permitting smaller pits and lands in the disc. In the DVD format, this allows 4.7 GB storage on a standard 12 cm, single-sided, single-layer disc; alternatively, smaller media, such as the DataPlay format, can have capacity comparable to that of the larger, standard compact 12 cm disc. DVD and derivatives DVD-Audio DualDisc Digital Video Express (DIVX) DVD-RAM DVD±R Nintendo GameCube Game Disc (miniDVD derivative) Wii Optical Disc (DVD derivative) Super Audio CD (SACD) Enhanced Versatile Disc DataPlay Hi-MD Universal Media Disc (UMD) Ultra Density Optical DVD-ROM In 1995, a consortium of manufacturers (Sony, Philips, Toshiba, Panasonic) developed the second generation of the optical disc, the DVD. The DVD disc appeared after the CD-ROM had become widespread in society. Third-generation Third-generation optical discs are used for distributing high-definition video and videogames and support greater data storage capacities, accomplished with short-wavelength visible-light lasers and greater numerical apertures. Blu-ray Disc and HD DVD use blue-violet lasers and focusing optics of greater aperture, for use with discs with smaller pits and lands, thereby providing greater data storage capacity per layer. In practice, the effective multimedia presentation capacity is improved with enhanced video data compression codecs such as H.264/MPEG-4 AVC and VC-1. 
Blu-ray and derivatives (up to 400 GB - experimental) BD-R and BD-RE High Fidelity Pure Audio AVCHD and AVCREC BDXL and Blu-ray 3D 4K Blu-ray and 8K Blu-ray Wii U Optical Disc (25 GB per layer) HD DVD (discontinued disc format, up to 51 GB triple layer) CBHD (a derivative of the HD DVD format) HD VMD Professional Disc Announced but not released: Digital Multilayer Disk Fluorescent Multilayer Disc Forward Versatile Disc Blu-ray and HD-DVD The third generation optical disc was developed in 2000–2006 and was introduced as Blu-ray Disc. First movies on Blu-ray Discs were released in June 2006. Blu-ray eventually prevailed in a high definition optical disc format war over a competing format, the HD DVD. A standard Blu-ray disc can hold about 25 GB of data, a DVD about 4.7 GB, and a CD about 700 MB. Fourth-generation The following formats go beyond the current third-generation discs and have the potential to hold more than one terabyte (1 TB) of data and at least some are meant for cold data storage in data centers: Archival Disc Holographic Versatile Disc Announced but not released: LS-R Protein-coated disc Stacked Volumetric Optical Disc 5D DVD 3D optical data storage (not a single technology, examples are Hyper CD-ROM and Fluorescent Multilayer Disc) In 2004, development of the Holographic Versatile Disc (HVD) commenced, which promised the storage of several terabytes of data per disc. However, development stagnated towards the late 2000s due to lack of funding. In 2006, it was reported that Japanese researchers developed ultraviolet ray lasers with a wavelength of 210 nanometers, which would enable a higher bit density than Blu-ray discs. As of 2022, no updates on that project have been reported. Folio Photonics is planning to release high-capacity discs in 2024 with the cost of $5 per TB, with a roadmap to $1 per TB, using 80% less power than HDD. Overview of optical types
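As a worked check of the Compact Disc audio figures quoted in the CD-ROM section above (16-bit samples at 44,100 samples per second and up to 74 minutes of music), the raw audio data rate and payload can be computed as follows. The assumption of two stereo channels is standard for CD audio but is not stated explicitly above, so it is marked as such in the sketch.

# Worked check of CD audio figures (assumes 2 stereo channels, which the text does not state explicitly).
sample_rate = 44_100          # samples per second, per channel
bits_per_sample = 16
channels = 2                  # standard CD audio is stereo (assumption)
minutes = 74

bytes_per_second = sample_rate * (bits_per_sample // 8) * channels
total_bytes = bytes_per_second * minutes * 60
print(bytes_per_second)                 # 176,400 B/s of raw audio
print(round(total_bytes / 1e6), "MB")   # ~783 MB of raw audio for 74 minutes
# The 650 MB data-capacity figure quoted above is lower because data discs carry
# additional error-correction overhead per sector.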
Technology
Data storage
null
41460
https://en.wikipedia.org/wiki/Optical%20isolator
Optical isolator
An optical isolator, or optical diode, is an optical component which allows the transmission of light in only one direction. It is typically used to prevent unwanted feedback into an optical oscillator, such as a laser cavity. The operation of conventional optical isolators relies on the Faraday effect (which in turn is produced by the magneto-optic effect), which is used in the main component, the Faraday rotator. However, integrated isolators which do not rely on magnetism have been made in recent years too. Theory The main component of the optical isolator is the Faraday rotator. The magnetic field, B, applied to the Faraday rotator causes a rotation in the polarization of the light due to the Faraday effect. The angle of rotation, β, is given by β = VBd, where V is the Verdet constant of the material (amorphous or crystalline solid, or liquid, or crystalline liquid, or vaporous, or gaseous) of which the rotator is made, and d is the length of the rotator. This is shown in Figure 2. Specifically for an optical isolator, the values are chosen to give a rotation of 45°. It has been shown that a crucial requirement for any kind of optical isolator (not only the Faraday isolator) is some kind of non-reciprocal optics. Polarization dependent isolator The polarization dependent isolator, or Faraday isolator, is made of three parts: an input polarizer (polarized vertically), a Faraday rotator, and an output polarizer, called an analyzer (polarized at 45°). Light traveling in the forward direction becomes polarized vertically by the input polarizer. The Faraday rotator will rotate the polarization by 45°. The analyzer then enables the light to be transmitted through the isolator. Light traveling in the backward direction becomes polarized at 45° by the analyzer. The Faraday rotator will again rotate the polarization by 45°. This means the light is polarized horizontally (the direction of rotation is not sensitive to the direction of propagation). Since the polarizer is vertically aligned, the light will be extinguished. Figure 2 shows a Faraday rotator with an input polarizer and an output analyzer. For a polarization dependent isolator, the angle between the polarizer and the analyzer is set to 45°. The Faraday rotator is chosen to give a 45° rotation. Polarization dependent isolators are typically used in free space optical systems. This is because the polarization of the source is typically maintained by the system. In optical fibre systems, the polarization direction is typically dispersed in non-polarization-maintaining systems; hence the angle of polarization will lead to a loss. Polarization independent isolator The polarization independent isolator is made of three parts: an input birefringent wedge (with its ordinary polarization direction vertical and its extraordinary polarization direction horizontal), a Faraday rotator, and an output birefringent wedge (with its ordinary polarization direction at 45°, and its extraordinary polarization direction at −45°). Light traveling in the forward direction is split by the input birefringent wedge into its vertical (0°) and horizontal (90°) components, called the ordinary ray (o-ray) and the extraordinary ray (e-ray) respectively. The Faraday rotator rotates both the o-ray and e-ray by 45°. This means the o-ray is now at 45°, and the e-ray is at −45°. The output birefringent wedge then recombines the two components. Light traveling in the backward direction is separated into the o-ray at 45° and the e-ray at −45° by the birefringent wedge. 
The Faraday Rotator again rotates both the rays by 45°. Now the o-ray is at 90°, and the e-ray is at 0°. Instead of being focused by the second birefringent wedge, the rays diverge. Typically collimators are used on either side of the isolator. In the transmitted direction the beam is split and then combined and focused into the output collimator. In the isolated direction the beam is split, and then diverged, so it does not focus at the collimator. Figure 3 shows the propagation of light through a polarization independent isolator. The forward travelling light is shown in blue, and the backward propagating light is shown in red. The rays were traced using an ordinary refractive index of 2, and an extraordinary refractive index of 3. The wedge angle is 7°. The Faraday rotator The most important optical element in an isolator is the Faraday rotator. The characteristics that one looks for in a Faraday rotator optic include a high Verdet constant, low absorption coefficient, low non-linear refractive index and high damage threshold. Also, to prevent self-focusing and other thermal related effects, the optic should be as short as possible. The two most commonly used materials for the 700–1100 nm range are terbium doped borosilicate glass and terbium gallium garnet crystal (TGG). For long distance fibre communication, typically at 1310 nm or 1550 nm, yttrium iron garnet crystals are used (YIG). Commercial YIG based Faraday isolators reach isolations higher than 30 dB. Optical isolators are different from 1/4 wave plate based isolators because the Faraday rotator provides non-reciprocal rotation while maintaining linear polarization. That is, the polarization rotation due to the Faraday rotator is always in the same relative direction. So in the forward direction, the rotation is positive 45°. In the reverse direction, the rotation is −45°. This is due to the change in the relative magnetic field direction, positive one way, negative the other. This then adds to a total of 90° when the light travels in the forward direction and then the negative direction. This allows the higher isolation to be achieved. Optical isolators and thermodynamics It might seem at first glance that a device that allows light to flow in only one direction would violate Kirchhoff's law and the second law of thermodynamics, by allowing light energy to flow from a cold object to a hot object and blocking it in the other direction, but the violation is avoided because the isolator must absorb (not reflect) the light from the hot object and will eventually reradiate it to the cold one. Attempts to re-route the photons back to their source unavoidably involve creating a route by which other photons can travel from the hot body to the cold one, avoiding the paradox.
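The angle bookkeeping for the polarization dependent isolator described above can be checked numerically with Jones calculus. The sketch below is a simplified illustration rather than a model of any particular device: it treats the polarizers as ideal projectors and the Faraday rotator as a fixed 45° rotation whose sense in the laboratory frame is the same for both propagation directions, which is the non-reciprocity discussed above.

# Minimal Jones-calculus sketch of a Faraday isolator (idealized components, illustrative only).
import numpy as np

def rotator(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def polarizer(theta):
    # Ideal linear polarizer transmitting the component along angle theta (measured from the x-axis).
    a = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(a, a)

deg = np.pi / 180
P_in = polarizer(90 * deg)     # input polarizer, vertical
P_out = polarizer(45 * deg)    # analyzer at 45 degrees
F = rotator(-45 * deg)         # Faraday rotation: same lab-frame sense in both directions

E = np.array([1.0, 1.0]) / np.sqrt(2)    # arbitrary incident field
forward = P_out @ F @ P_in @ E           # polarizer -> rotator -> analyzer
backward = P_in @ F @ P_out @ E          # analyzer -> rotator -> polarizer
print(np.linalg.norm(forward) ** 2)      # ~0.5: transmitted
print(np.linalg.norm(backward) ** 2)     # ~0.0: extinguished, as described above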
Technology
Optical components
null
41461
https://en.wikipedia.org/wiki/Optical%20path%20length
Optical path length
In optics, optical path length (OPL, denoted Λ in equations), also known as optical length or optical distance, is the length that light needs to travel through a vacuum to create the same phase difference as it would have when traveling through a given medium. It is calculated by taking the product of the geometric length of the optical path followed by light and the refractive index of the homogeneous medium through which the light ray propagates; for inhomogeneous optical media, the product above is generalized as a path integral as part of the ray tracing procedure. A difference in OPL between two paths is often called the optical path difference (OPD). OPL and OPD are important because they determine the phase of the light and govern interference and diffraction of light as it propagates. In a medium of constant refractive index, n, the OPL for a path of geometrical length s is just Λ = ns. If the refractive index varies along the path, the OPL is given by the line integral Λ = ∫C n ds, where n is the local refractive index as a function of distance along the path C. An electromagnetic wave propagating along a path C undergoes the same phase shift over C as if it were propagating along a path in vacuum whose length is equal to the optical path length of C. Thus, if a wave is traveling through several different media, then the optical path length of each medium can be added to find the total optical path length. The optical path difference between the paths taken by two identical waves can then be used to find the phase change. Finally, using the phase change, the interference between the two waves can be calculated. Fermat's principle states that the path light takes between two points is the path that has the minimum optical path length. Optical path difference The OPD corresponds to the phase shift undergone by the light emitted from two previously coherent sources when passed through media of different refractive indices. For example, a wave passing through air appears to travel a shorter distance than an identical wave traveling the same distance in glass. This is because a larger number of wavelengths fit in the same distance due to the higher refractive index of the glass. The OPD can be calculated from the following equation: OPD = d1n1 − d2n2, where d1 and d2 are the distances the ray travels through medium 1 and medium 2 respectively, n1 is the greater refractive index (e.g., glass) and n2 is the smaller refractive index (e.g., air).
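As a small worked example of the formulas above, the sketch below compares equal geometric paths through glass and air and converts the resulting OPD into a phase difference. The refractive indices, path length and wavelength are illustrative assumptions rather than values given in the article.

# Illustrative OPD calculation (assumed values: n_glass = 1.5, n_air = 1.0, 1 mm paths, 500 nm light).
import math

n_glass, n_air = 1.5, 1.0
d = 1e-3                      # 1 mm of each medium
wavelength = 500e-9           # vacuum wavelength

opd = d * n_glass - d * n_air            # OPD = d1*n1 - d2*n2
phase_difference = 2 * math.pi * opd / wavelength
print(opd)                               # 5e-4 m: the glass path is "optically" 0.5 mm longer
print(opd / wavelength)                  # ~1000 extra wavelengths fit into the glass path
print(phase_difference)                  # corresponding phase difference in radians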
Physical sciences
Optics
Physics
41464
https://en.wikipedia.org/wiki/Visible%20spectrum
Visible spectrum
The visible spectrum is the band of the electromagnetic spectrum that is visible to the human eye. Electromagnetic radiation in this range of wavelengths is called visible light (or simply light). The optical spectrum is sometimes considered to be the same as the visible spectrum, but some authors define the term more broadly, to include the ultraviolet and infrared parts of the electromagnetic spectrum as well, known collectively as optical radiation. A typical human eye will respond to wavelengths from about 380 to about 750 nanometers. In terms of frequency, this corresponds to a band in the vicinity of 400–790 terahertz. These boundaries are not sharply defined and may vary per individual. Under optimal conditions, these limits of human perception can extend to 310 nm (ultraviolet) and 1100 nm (near infrared). The spectrum does not contain all the colors that the human visual system can distinguish. Unsaturated colors such as pink, or purple variations like magenta, for example, are absent because they can only be made from a mix of multiple wavelengths. Colors containing only one wavelength are also called pure colors or spectral colors. Visible wavelengths pass largely unattenuated through the Earth's atmosphere via the "optical window" region of the electromagnetic spectrum. An example of this phenomenon is when clean air scatters blue light more than red light, and so the midday sky appears blue (apart from the area around the Sun which appears white because the light is not scattered as much). The optical window is also referred to as the "visible window" because it overlaps the human visible response spectrum. The near infrared (NIR) window lies just out of the human vision, as well as the medium wavelength infrared (MWIR) window, and the long-wavelength or far-infrared (LWIR or FIR) window, although other animals may perceive them. Spectral colors Colors that can be produced by visible light of a narrow band of wavelengths (monochromatic light) are called pure spectral colors. The various color ranges indicated in the illustration are an approximation: The spectrum is continuous, with no clear boundaries between one color and the next. History In the 13th century, Roger Bacon theorized that rainbows were produced by a similar process to the passage of light through glass or crystal. In the 17th century, Isaac Newton discovered that prisms could disassemble and reassemble white light, and described the phenomenon in his book Opticks. He was the first to use the word spectrum (Latin for "appearance" or "apparition") in this sense in print in 1671 in describing his experiments in optics. Newton observed that, when a narrow beam of sunlight strikes the face of a glass prism at an angle, some is reflected and some of the beam passes into and through the glass, emerging as different-colored bands. Newton hypothesized light to be made up of "corpuscles" (particles) of different colors, with the different colors of light moving at different speeds in transparent matter, red light moving more quickly than violet in glass. The result is that red light is bent (refracted) less sharply than violet as it passes through the prism, creating a spectrum of colors. Newton originally divided the spectrum into six named colors: red, orange, yellow, green, blue, and violet. 
He later added indigo as the seventh color since he believed that seven was a perfect number as derived from the ancient Greek sophists, of there being a connection between the colors, the musical notes, the known objects in the Solar System, and the days of the week. The human eye is relatively insensitive to indigo's frequencies, and some people who have otherwise-good vision cannot distinguish indigo from blue and violet. For this reason, some later commentators, including Isaac Asimov, have suggested that indigo should not be regarded as a color in its own right but merely as a shade of blue or violet. Evidence indicates that what Newton meant by "indigo" and "blue" does not correspond to the modern meanings of those color words. Comparing Newton's observation of prismatic colors with a color image of the visible light spectrum shows that "indigo" corresponds to what is today called blue, whereas his "blue" corresponds to cyan. In the 18th century, Johann Wolfgang von Goethe wrote about optical spectra in his Theory of Colours. Goethe used the word spectrum (Spektrum) to designate a ghostly optical afterimage, as did Schopenhauer in On Vision and Colors. Goethe argued that the continuous spectrum was a compound phenomenon. Where Newton narrowed the beam of light to isolate the phenomenon, Goethe observed that a wider aperture produces not a spectrum but rather reddish-yellow and blue-cyan edges with white between them. The spectrum appears only when these edges are close enough to overlap. In the early 19th century, the concept of the visible spectrum became more definite, as light outside the visible range was discovered and characterized by William Herschel (infrared) and Johann Wilhelm Ritter (ultraviolet), Thomas Young, Thomas Johann Seebeck, and others. Young was the first to measure the wavelengths of different colors of light, in 1802. The connection between the visible spectrum and color vision was explored by Thomas Young and Hermann von Helmholtz in the early 19th century. Their theory of color vision correctly proposed that the eye uses three distinct receptors to perceive color. Limits to visible range The visible spectrum is limited to wavelengths that can both reach the retina and trigger visual phototransduction (excite a visual opsin). Insensitivity to UV light is generally limited by transmission through the lens. Insensitivity to IR light is limited by the spectral sensitivity functions of the visual opsins. The range is defined psychometrically by the luminous efficiency function, which accounts for all of these factors. In humans, there is a separate function for each of two visual systems, one for photopic vision, used in daylight, which is mediated by cone cells, and one for scotopic vision, used in dim light, which is mediated by rod cells. Each of these functions have different visible ranges. However, discussion on the visible range generally assumes photopic vision. Atmospheric transmission The visible range of most animals evolved to match the optical window, which is the range of light that can pass through the atmosphere. The ozone layer absorbs almost all UV light (below 315 nm). However, this only affects cosmic light (e.g. sunlight), not terrestrial light (e.g. Bioluminescence). Ocular transmission Before reaching the retina, light must first transmit through the cornea and lens. UVB light (< 315 nm) is filtered mostly by the cornea, and UVA light (315–400 nm) is filtered mostly by the lens. 
The lens also yellows with age, attenuating transmission most strongly at the blue part of the spectrum. This can cause xanthopsia as well as a slight truncation of the short-wave (blue) limit of the visible spectrum. Subjects with aphakia are missing a lens, so UVA light can reach the retina and excite the visual opsins; this expands the visible range and may also lead to cyanopsia. Opsin absorption Each opsin has a spectral sensitivity function, which defines how likely it is to absorb a photon of each wavelength. The luminous efficiency function is approximately the superposition of the contributing visual opsins. Variance in the position of the individual opsin spectral sensitivity functions therefore affects the luminous efficiency function and the visible range. For example, the long-wave (red) limit changes proportionally to the position of the L-opsin. The positions are defined by the peak wavelength (wavelength of highest sensitivity), so if the L-opsin peak wavelength blue-shifts by 10 nm, the long-wave limit of the visible spectrum also shifts by 10 nm. Large deviations of the L-opsin peak wavelength lead to a form of color blindness called protanomaly, and a missing L-opsin (protanopia) shortens the visible spectrum by about 30 nm at the long-wave limit. Forms of color blindness affecting the M-opsin and S-opsin do not significantly affect the luminous efficiency function or the limits of the visible spectrum. Different definitions Regardless of actual physical and biological variance, the definition of the limits is not standard and will change depending on the industry. For example, some industries may be concerned with practical limits and so would conservatively report 420–680 nm, while others, concerned with psychometrics and achieving the broadest spectrum, would liberally report 380–750 nm, or even 380–800 nm. The luminous efficiency function in the NIR does not have a hard cutoff, but rather an exponential decay, such that the function's value (or vision sensitivity) at 1,050 nm is about 10⁹ times weaker than at 700 nm; much higher intensity is therefore required to perceive 1,050 nm light than 700 nm light. Vision outside the visible spectrum Under ideal laboratory conditions, subjects may perceive infrared light up to at least 1,064 nm. While 1,050 nm NIR light can evoke red, suggesting direct absorption by the L-opsin, there are also reports that pulsed NIR lasers can evoke green, which suggests two-photon absorption may be enabling extended NIR sensitivity. Similarly, young subjects may perceive ultraviolet wavelengths down to about 310–313 nm, but detection of light below 380 nm may be due to fluorescence of the ocular media, rather than direct absorption of UV light by the opsins. As UVA light is absorbed by the ocular media (lens and cornea), it may fluoresce and be released at a lower energy (longer wavelength) that can then be absorbed by the opsins. For example, when the lens absorbs 350 nm light, the fluorescence emission spectrum is centered on 440 nm. Non-visual light detection In addition to the photopic and scotopic systems, humans have other systems for detecting light that do not contribute to the primary visual system. For example, melanopsin has an absorption range of 420–540 nm and regulates circadian rhythm and other reflexive processes. Since the melanopsin system does not form images, it is not strictly considered vision and does not contribute to the visible range. 
In non-humans The visible spectrum is defined as that visible to humans, but the variance between species is large. Not only can cone opsins be spectrally shifted to alter the visible range, but vertebrates with 4 cones (tetrachromatic) or 2 cones (dichromatic) relative to humans' 3 (trichromatic) will also tend to have a wider or narrower visible spectrum than humans, respectively. Vertebrates tend to have 1-4 different opsin classes: longwave sensitive (LWS) with peak sensitivity between 500–570 nm, middlewave sensitive (MWS) with peak sensitivity between 480–520 nm, shortwave sensitive (SWS) with peak sensitivity between 415–470 nm, and violet/ultraviolet sensitive (VS/UVS) with peak sensitivity between 355–435 nm. Testing the visual systems of animals behaviorally is difficult, so the visible range of animals is usually estimated by comparing the peak wavelengths of opsins with those of typical humans (S-opsin at 420 nm and L-opsin at 560 nm). Mammals Most mammals have retained only two opsin classes (LWS and VS), due likely to the nocturnal bottleneck. However, old world primates (including humans) have since evolved two versions in the LWS class to regain trichromacy. Unlike most mammals, rodents' UVS opsins have remained at shorter wavelengths. Along with their lack of UV filters in the lens, mice have a UVS opsin that can detect down to 340 nm. While allowing UV light to reach the retina can lead to retinal damage, the short lifespan of mice compared with other mammals may minimize this disadvantage relative to the advantage of UV vision. Dogs have two cone opsins at 429 nm and 555 nm, so see almost the entire visible spectrum of humans, despite being dichromatic. Horses have two cone opsins at 428 nm and 539 nm, yielding a slightly more truncated red vision. Birds Most other vertebrates (birds, lizards, fish, etc.) have retained their tetrachromacy, including UVS opsins that extend further into the ultraviolet than humans' VS opsin. The sensitivity of avian UVS opsins vary greatly, from 355–425 nm, and LWS opsins from 560–570 nm. This translates to some birds with a visible spectrum on par with humans, and other birds with greatly expanded sensitivity to UV light. The LWS opsin of birds is sometimes reported to have a peak wavelength above 600 nm, but this is an effective peak wavelength that incorporates the filter of avian oil droplets. The peak wavelength of the LWS opsin alone is the better predictor of the long-wave limit. A possible benefit of avian UV vision involves sex-dependent markings on their plumage that are visible only in the ultraviolet range. Fish Teleosts (bony fish) are generally tetrachromatic. The sensitivity of fish UVS opsins vary from 347-383 nm, and LWS opsins from 500-570 nm. However, some fish that use alternative chromophores can extend their LWS opsin sensitivity to 625 nm. The popular belief that the common goldfish is the only animal that can see both infrared and ultraviolet light is incorrect, because goldfish cannot see infrared light. Invertebrates The visual systems of invertebrates deviate greatly from vertebrates, so direct comparisons are difficult. However, UV sensitivity has been reported in most insect species. Bees and many other insects can detect ultraviolet light, which helps them find nectar in flowers. Plant species that depend on insect pollination may owe reproductive success to their appearance in ultraviolet light rather than how colorful they appear to humans. Bees' long-wave limit is at about 590 nm. 
Mantis shrimp exhibit up to 14 opsins, enabling a visible range of less than 300 nm to above 700 nm. Thermal vision Some snakes can "see" radiant heat at wavelengths between 5 and 30 μm to a degree of accuracy such that a blind rattlesnake can target vulnerable body parts of the prey at which it strikes, and other snakes with the organ may detect warm bodies from a meter away. It may also be used in thermoregulation and predator detection. Spectroscopy Spectroscopy is the study of objects based on the spectrum of color they emit, absorb or reflect. Visible-light spectroscopy is an important tool in astronomy (as is spectroscopy at other wavelengths), where scientists use it to analyze the properties of distant objects. Chemical elements and small molecules can be detected in astronomical objects by observing emission lines and absorption lines. For example, helium was first detected by analysis of the spectrum of the Sun. The shift in frequency of spectral lines is used to measure the Doppler shift (redshift or blueshift) of distant objects to determine their velocities towards or away from the observer. Astronomical spectroscopy uses high-dispersion diffraction gratings to observe spectra at very high spectral resolutions.
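For the Doppler measurement described above, the non-relativistic relation v ≈ c·Δλ/λrest converts a small shift of a known spectral line into a line-of-sight velocity. A minimal Python sketch; the hydrogen-alpha rest wavelength is standard, while the observed wavelength is an invented example.

C = 299_792_458.0   # speed of light in m/s

def radial_velocity(observed_nm, rest_nm):
    """Non-relativistic Doppler shift; positive means redshifted (receding)."""
    return C * (observed_nm - rest_nm) / rest_nm

# Example: the hydrogen-alpha line (rest wavelength 656.281 nm) observed
# slightly redshifted; the observed value is chosen only for illustration.
v = radial_velocity(656.50, 656.281)
print(f"line-of-sight velocity: {v / 1000:.0f} km/s (receding)")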
Physical sciences
Electrodynamics
null
41515
https://en.wikipedia.org/wiki/Cod
Cod
Cod (plural: cod) is the common name for the demersal fish genus Gadus, belonging to the family Gadidae. Cod is also used as part of the common name for a number of other fish species, and one species that belongs to the genus Gadus, the Alaska pollock (Gadus chalcogrammus), is not commonly called cod. The two most common species of cod are the Atlantic cod (Gadus morhua), which lives in the colder waters and deeper sea regions throughout the North Atlantic, and the Pacific cod (Gadus macrocephalus), which is found in both eastern and western regions of the northern Pacific. Gadus morhua was named by Linnaeus in 1758. (However, G. morhua callarias, a low-salinity, nonmigratory race restricted to parts of the Baltic, was originally described as Gadus callarias by Linnaeus.) Cod as food is popular in several parts of the world. It has a mild flavour and a dense, flaky, white flesh. Cod livers are processed to make cod liver oil, a common source of vitamin A, vitamin D, vitamin E, and omega-3 fatty acids (EPA and DHA). Young Atlantic cod or haddock prepared in strips for cooking is called scrod. In the United Kingdom, Atlantic cod is one of the most common ingredients in fish and chips, along with haddock and plaice. Species At various times in the past, taxonomists included many species in the genus Gadus. Most of these are now either classified in other genera, or have been recognized as forms of one of three species. All these species have a number of common names, most of them ending with the word "cod", whereas other, closely related species have other common names (such as pollock and haddock). However, many other, unrelated species also have common names ending with cod. The usage often changes with different localities and at different times. Cod in the genus Gadus Three species in the genus Gadus are currently called cod: the Atlantic cod (Gadus morhua), the Pacific cod (Gadus macrocephalus), and the Greenland cod (Gadus ogac). The fourth species of the genus Gadus, Gadus chalcogrammus, is commonly called Alaska pollock or walleye pollock. But there are also less widespread alternative trade names that highlight the fish's membership in the cod genus, such as snow cod or bigeye cod. Related species Cod forms part of the common name of many other fish no longer classified in the genus Gadus. Many are members of the family Gadidae; others are members of three related families within the order Gadiformes whose names include the word "cod": the morid cods, Moridae (100 or so species); the eel cods, Muraenolepididae (four species); and the Eucla cod, Euclichthyidae (one species). The tadpole cod family (Ranicipitidae) has now been placed in Gadidae. Some fish have common names derived from "cod", such as codling, codlet, or tomcod. ("Codling" is also used as a name for a young cod.) Other species Some fish commonly known as cod are unrelated to Gadus. Part of this name confusion is market-driven. Severely shrunken Atlantic cod stocks have led to the marketing of cod replacements using culinary names of the form "x cod", according to culinary rather than phyletic similarity. The common names for the following species have become well established; note that all inhabit the Southern Hemisphere.
Perciformes Fish of the order Perciformes that are commonly called "cod" include: Blue cod (Parapercis colias), Eastern freshwater cod (Maccullochella ikei), Mary River cod (Maccullochella mariensis), Murray cod (Maccullochella peelii), Potato cod (Epinephelus tukula), Sleepy cod (Oxyeleotris lineolatus) and Trout cod (Maccullochella macquariensis), as well as members of the notothen family, Nototheniidae, including the Antarctic cod (Dissostichus mawsoni), Black cod (Notothenia microlepidota) and Maori cod (Paranotothenia magellanica). Rock cod, reef cod, and coral cod Almost all coral cod, reef cod or rock cod are also in order Perciformes. Most are better known as groupers, and belong to the family Serranidae. Others belong to the Nototheniidae. Two exceptions are the Australasian red rock cod, which belongs to a different order (see below), and the fish known simply as the rock cod and as soft cod in New Zealand, Lotella rhacina, which as noted above actually is related to the true cod (it is a morid cod). Scorpaeniformes From the order Scorpaeniformes: Ling cod (Ophiodon elongatus), Red rock cod (Scorpaena papillosa) and Rock cod (Sebastes). Ophidiiformes The tadpole cod family, Ranicipitidae, and the Eucla cod family, Euclichthyidae, were formerly classified in the order Ophidiiformes, but are now grouped with the Gadiformes. Marketed as cod Some fish that do not have "cod" in their names are sometimes sold as cod, including the Haddock (Melanogrammus aeglefinus), the Whiting (Merlangius merlangus), and the Patagonian toothfish or Chilean seabass. Haddock and whiting belong to the same family, the Gadidae, as cod. Characteristics Cods of the genus Gadus have three rounded dorsal and two anal fins. The pelvic fins are small, with the first ray extended, and are set under the gill cover (i.e. the throat region), in front of the pectoral fins. The upper jaw extends over the lower jaw, which has a well-developed chin barbel. The eyes are medium-sized, approximately the same as the length of the chin barbel. Cod have a distinct white lateral line running from the gill slit above the pectoral fin, to the base of the caudal or tail fin. The back tends to be a greenish to sandy brown, and shows extensive mottling, especially towards the lighter sides and white belly. Dark brown colouration of the back and sides is not uncommon, especially for individuals that have resided in rocky inshore regions. The Atlantic cod can change colour at certain water depths. It has two distinct colour phases: gray-green and reddish brown. Its average weight is , but specimens weighing up to have been recorded. Pacific cod are smaller than Atlantic cod and are darker in colour. Distribution Atlantic cod (Gadus morhua) live in the colder waters and deeper sea regions throughout the North Atlantic. Pacific cod (Gadus macrocephalus) is found in both eastern and western regions of the Pacific. Atlantic cod could be further divided into several stocks, including the Arcto-Norwegian, North Sea, Baltic Sea, Faroe, Iceland, East Greenland, West Greenland, Newfoundland, and Labrador stocks. There seems to be little interchange between the stocks, although migrations to their individual breeding grounds may involve distances of or more. For instance, eastern Baltic cod shows specific reproductive adaptations to low salinity compared to Western Baltic and Atlantic cod. Atlantic cod occupy varied habitats, favouring rough ground, especially inshore, and are demersal in depths between , on average, although not uncommonly to depths of .
Off the Norwegian and New England coasts and on the Grand Banks of Newfoundland, cod congregate at certain seasons in water of depth. Cod are gregarious and form schools, although shoaling tends to be a feature of the spawning season. Life cycle Spawning of northeastern Atlantic cod occurs between January and April (March and April are the peak months), at a depth of in specific spawning grounds at water temperatures between . Around the UK, the major spawning grounds are in the middle to southern North Sea, the start of the Bristol Channel (north of Newquay), the Irish Channel (both east and west of the Isle of Man), around Stornoway, and east of Helmsdale. Prespawning courtship involves fin displays and male grunting, which leads to pairing. The male inverts himself beneath the female, and the pair swim in circles while spawning. The eggs are planktonic and hatch between eight and 23 days, with larva reaching in length. This planktonic phase lasts some ten weeks, enabling the young cod to increase its body weight by 40-fold, and growing to about . The young cod then move to the seabed and change their diet to small benthic crustaceans, such as isopods and small crabs. They increase in size to in the first six months, by the end of their first year, and to by the end of the second. Growth tends to be less at higher latitudes. Cod reach maturity at about at about 3 to 4 years of age. Changes in growth rate over decades of particular stocks have been reported, current eastern Baltic cod shows the lowest growth observed since 1955. Ecology Adult cod are active hunters, feeding on sand eels, whiting, haddock, small cod, squid, crabs, lobsters, mussels, worms, mackerel, and molluscs. In the Baltic Sea the most important prey species are herring and sprat. Many studies that analyze the stomach contents of these fish indicate that cod is the top predator, preying on the herring and sprat. Sprat form particularly high concentrations in the Bornholm Basin in the southern Baltic Sea. Although cod feed primarily on adult sprat, sprat tend to prey on the cod eggs and larvae. Cod and related species are plagued by parasites. For example, the cod worm, Lernaeocera branchialis, starts life as a copepod-like larva, a small free-swimming crustacean. The first host used by the larva is a flatfish or lumpsucker, which it captures with grasping hooks at the front of its body. It penetrates the fish with a thin filament, which it uses to suck the fish's blood. The nourished larvae then mate on the fish. The female larva, with her now fertilized eggs, then finds a cod, or a cod-like fish such as a haddock or whiting. There the larva clings to the gills while it metamorphoses into a plump sinusoidal wormlike body with a coiled mass of egg strings at the rear. The front part of the worm's body penetrates the body of the cod until it enters the rear bulb of the host's heart. There, firmly rooted in the cod's circulatory system, the front part of the parasite develops like the branches of a tree, reaching into the main artery. In this way, the worm extracts nutrients from the cod's blood, remaining safely tucked beneath the cod's gill cover until it releases a new generation of offspring into the water. Fisheries The 2006 northwest Atlantic cod quota is 23,000 tons, representing half the available stocks, while the northeast Atlantic quota is 473,000 tons. Pacific cod is currently enjoying strong global demand. The 2006 total allowable catch (TAC) for the Gulf of Alaska and Aleutian Islands was 260,000 tons. 
Aquaculture Farming of Atlantic cod has received a significant amount of interest due to the overall trend of increasing cod prices alongside reduced wild catches. However, progress in creating large scale farming of cod has been slow, mainly due to bottlenecks in the larval production stage, where survival and growth are often unpredictable. It has been suggested that this bottleneck may be overcome by ensuring cod larvae are fed diets with similar nutritional content as the copepods they feed on in the wild Recent examples have shown that increasing dietary levels of minerals such as selenium, iodine and zinc may improve survival and/or biomarkers for health in aquaculture reared cod larvae. As food Cod is popular as a food with a mild flavour and a dense, flaky white flesh. Cod livers are processed to make cod liver oil, an important source of vitamin A, vitamin D, vitamin E and omega-3 fatty acids (EPA and DHA). Young Atlantic cod or haddock prepared in strips for cooking is called scrod. In the United Kingdom, Atlantic cod is one of the most common ingredients in fish and chips, along with haddock and plaice. Cod's soft liver can be tinned (canned) and eaten. History Cod has been an important economic commodity in international markets since the Viking period (around 800 AD). Norwegians travelled with dried cod and soon a dried cod market developed in southern Europe. This market has lasted for more than 1,000 years, enduring the Black Death, wars and other crises, and is still an important Norwegian fish trade. The Portuguese began fishing cod in the 15th century. Clipfish is widely enjoyed in Portugal. The Basques played an important role in the cod trade, and allegedly found the Canadian fishing banks before Columbus' discovery of America. The North American east coast developed in part due to the vast cod stocks. Many cities in the New England area are located near cod fishing grounds. The fish was so important to the history and development of Massachusetts, the state's House of Representatives hung a wood carving of a codfish, known as the Sacred Cod of Massachusetts, in its chambers. Apart from the long history, cod differ from most fish because the fishing grounds are far from population centres. The large cod fisheries along the coast of North Norway (and in particular close to the Lofoten islands) have been developed almost uniquely for export, depending on sea transport of stockfish over large distances. Since the introduction of salt, dried and salted cod (clipfish or 'klippfisk' in Norwegian) has also been exported. By the end of the 14th century, the Hanseatic League dominated trade operations and sea transport, with Bergen as the most important port. William Pitt the Elder, criticizing the Treaty of Paris in Parliament, claimed cod was "British gold"; and that it was folly to restore Newfoundland fishing rights to the French. In the 17th and 18th centuries in the New World, especially in Massachusetts and Newfoundland, cod became a major commodity, creating trade networks and cross-cultural exchanges. In 1733, Britain tried to gain control over trade between New England and the British Caribbean by imposing the Molasses Act, which they believed would eliminate the trade by making it unprofitable. The cod trade grew instead, because the "French were eager to work with the New Englanders in a lucrative contraband arrangement". In addition to increasing trade, the New England settlers organized into a "codfish aristocracy". 
The colonists rose up against Britain's "tariff on an import". In the 20th century, Iceland re-emerged as a fishing power and entered the Cod Wars. In the late 20th and early 21st centuries, fishing off the European and American coasts severely depleted stocks and become a major political issue. The necessity of restricting catches to allow stocks to recover upset the fishing industry and politicians who are reluctant to hurt employment. Collapse of the Atlantic northwest cod fishery On July 2, 1992, the Honourable John Crosbie, Canadian Federal Minister of Fisheries and Oceans, declared a two-year moratorium on the Northern Cod fishery, a designated fishing region off the coast of Newfoundland, after data showed that the total cod biomass had suffered a collapse to less than 1% of its normal value. The minister championed the measure as a temporary solution, allowing the cod population time to recover. The fisheries had long shaped the lives and communities on Canada's Atlantic eastern coast for the preceding five centuries. Societies which are dependent on fishing have a strong mutual relationship with them: the act of fishing changes the ecosystems' balance, which forces the fishery and, in turn, the fishing societies to adapt to new ecological conditions. The near-complete destruction of the Atlantic northwest cod biomass off the shores devastated coastal communities, which had been overexploiting the same cod population for decades. The fishermen along the Atlantic northwest had employed modern fishing technologies, including the ecologically-devastating practice of trawling, especially in the years leading up to the 1990s, in the misguided belief that fishing stocks are perpetually plentiful and unable to be depleted. After this assumption was empirically and abruptly shown to be incorrect, to the dismay of government officials and rural workers, some 19,000 fishermen and cod processing plant workers in Newfoundland lost their employment. The powerful economic engine of rural Newfoundland coughed, wheezed, and died. Nearly 40,000 workers and harvesters in the provinces of Newfoundland and Labrador applied for the federal relief program TAGS (the Atlantic Groundfish Strategy). Abandoned and rusting fishing boats still litter the coasts of Newfoundland and the Canadian northwest to this day. The fishery minister, John Crosbie, after delivering a speech on the day before the declaration of the moratorium, or July 1, 1992, was publicly heckled and verbally harassed by disgruntled locals at a fishing village. The moratorium, initially lasting for only two years, was indefinitely extended after it became evident that cod populations had not recovered at all but, instead, had continued to spiral downward in both size and numbers, due to the damage caused by decades of horrible fishing practices, and the fact that the moratorium had permitted exceptions for food fisheries for "personal consumption" purposes to this very day. Some 12,000 tons of Northwest cod are still being caught every year along the Newfoundland coast by local fishermen. The collapse of the four-million ton biomass, which had persevered through several previous marine extinctions over tens of millions of years, in a timespan of no more than 20 years, is oft-cited by researchers as one of the most visible examples of the phenomenon of the "Tragedy of the Commons." 
Factors which had been implicated as contributing to the collapse include: overfishing; government mismanagement; the disregard of scientific uncertainty; warming habitat waters; declining reproduction; and plain human ignorance. The Northern Cod biomass has been recovering slowly since the imposition of the moratorium. However, as of 2021, the growth of the cod population has been stagnant since 2017, and some scientists argue that the population will not rebound unless the Fisheries Department of Canada lower its yearly quota to 5,000 tons.
Biology and health sciences
Acanthomorpha
null
41519
https://en.wikipedia.org/wiki/Photic%20zone
Photic zone
The photic zone (or euphotic zone, epipelagic zone, or sunlight zone) is the uppermost layer of a body of water that receives sunlight, allowing phytoplankton to perform photosynthesis. It undergoes a series of physical, chemical, and biological processes that supply nutrients into the upper water column. The photic zone is home to the majority of aquatic life due to the activity (primary production) of the phytoplankton. The thicknesses of the photic and euphotic zones vary with the intensity of sunlight as a function of season and latitude and with the degree of water turbidity. The bottommost, or aphotic, zone is the region of perpetual darkness that lies beneath the photic zone and includes most of the ocean waters. Photosynthesis in photic zone In the photic zone, the photosynthesis rate exceeds the respiration rate. This is due to the abundant solar energy which is used as an energy source for photosynthesis by primary producers such as phytoplankton. These phytoplankton grow extremely quickly because of the strong influence of sunlight. In fact, ninety-five percent of photosynthesis in the ocean occurs in the photic zone. Deeper in the water column, below the compensation depth at the base of the photic zone, there is little to no phytoplankton because of insufficient sunlight. The zone which extends from the base of the euphotic zone to the aphotic zone is sometimes called the dysphotic zone. Life in the photic zone Ninety percent of marine life lives in the photic zone, which is approximately two hundred meters deep. This includes phytoplankton (plants), such as dinoflagellates, diatoms, cyanobacteria, coccolithophores, and cryptomonads. It also includes zooplankton, the consumers in the photic zone, both carnivores and herbivores. Copepods, small crustaceans, are distributed everywhere in the photic zone. Finally, there are nekton (animals that can propel themselves, like fish, squids, and crabs), which are the largest and the most obvious animals in the photic zone, although they are the least numerous of all the groups. Phytoplankton are microscopic plants living suspended in the water column that have little or no means of motility. They are primary producers that use solar energy as a food source. Detritivores and scavengers are rare in the photic zone. Microbial decomposition of dead organisms begins here and continues once the bodies sink to the aphotic zone, where they form the most important source of nutrients for deep sea organisms. The depth of the photic zone depends on the transparency of the water. If the water is very clear, the photic zone can become very deep. If it is very murky, it can be only fifty feet (fifteen meters) deep. Animals within the photic zone, such as herring, sardines and other fishes that consistently live there, use the cycle of light and dark as an important environmental signal; the passage of dusk and dawn provides a sense of time that cues their migrations. Nutrient uptake in the photic zone Due to biological uptake, the photic zone has relatively low levels of nutrient concentrations. As a result, phytoplankton do not receive enough nutrients when there is high water-column stability. The spatial distribution of organisms can be controlled by a number of factors.
Physical factors include: temperature, hydrostatic pressure, and turbulent mixing such as the upward turbulent flux of inorganic nitrogen across the nutricline. Chemical factors include oxygen and trace elements. Biological factors include grazing and migrations. Upwelling carries nutrients from the deep waters into the photic zone, strengthening phytoplankton growth. The remixing and upwelling eventually bring nutrient-rich wastes back into the photic zone. The Ekman transport additionally brings more nutrients to the photic zone. Nutrient pulse frequency affects phytoplankton competition. Photosynthesis produces more of it. Being the first link in the food chain, what happens to phytoplankton creates a rippling effect for other species. Besides phytoplankton, many other animals also live in this zone and utilize these nutrients. The majority of ocean life occurs in the photic zone, the smallest ocean zone by water volume. The photic zone, although small, has a large impact on those who reside in it. Photic zone depth The euphotic depth is, by definition, the depth at which radiation is degraded down to 1% of its surface strength. Accordingly, its thickness depends on the extent of light attenuation in the water column. As incoming light at the surface can vary widely, this says little about the net growth of phytoplankton. Typical euphotic depths vary from only a few centimetres in highly turbid eutrophic lakes, to around 200 meters in the open ocean. It also varies with seasonal changes in turbidity, which can be strongly driven by phytoplankton concentrations, such that the depth of the photic zone often decreases as primary production increases. Below the euphotic depth, the respiration rate is greater than the photosynthesis rate. Phytoplankton production is important because it is interwoven with, and supports, other food webs. Light attenuation Most of the solar energy reaching the Earth is in the range of visible light, with wavelengths between about 400–700 nm. Each colour of visible light has a unique wavelength, and together they make up white light. The shortest wavelengths are on the violet and ultraviolet end of the spectrum, while the longest wavelengths are at the red and infrared end. In between, the colours of the visible spectrum comprise the familiar “ROYGBIV”: red, orange, yellow, green, blue, indigo, and violet. Water is very effective at absorbing incoming light, so the amount of light penetrating the ocean declines rapidly (is attenuated) with depth. At one metre depth only 45% of the solar energy that falls on the ocean surface remains. At 10 metres depth only 16% of the light is still present, and only 1% of the original light is left at 100 metres. No light penetrates beyond 1000 metres. In addition to overall attenuation, the oceans absorb the different wavelengths of light at different rates. The wavelengths at the extreme ends of the visible spectrum are attenuated faster than those wavelengths in the middle. Longer wavelengths are absorbed first; red is absorbed in the upper 10 metres, orange by about 40 metres, and yellow disappears before 100 metres. Shorter wavelengths penetrate further, with blue and green light reaching the deepest depths. This is why things appear blue underwater. How colours are perceived by the eye depends on the wavelengths of light that are received by the eye. An object appears red to the eye because it reflects red light and absorbs other colours. So the only colour reaching the eye is red.
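The 1% criterion above is commonly expressed with an exponential (Beer–Lambert-style) decay, I(z) = I(0)·exp(−Kd·z), where Kd is a diffuse attenuation coefficient assumed constant with depth. A minimal Python sketch; the example Kd values are illustrative, not measurements.

import math

def light_fraction(depth_m, kd_per_m):
    """Fraction of surface irradiance remaining at a given depth."""
    return math.exp(-kd_per_m * depth_m)

def euphotic_depth(kd_per_m, cutoff=0.01):
    """Depth at which irradiance falls to the cutoff (1% by the definition above)."""
    return math.log(1.0 / cutoff) / kd_per_m

# Illustrative attenuation coefficients: very clear open ocean vs. a turbid lake.
for label, kd in [("clear open ocean", 0.025), ("turbid lake", 2.0)]:
    print(f"{label}: euphotic depth about {euphotic_depth(kd):.0f} m, "
          f"light remaining at 10 m about {100 * light_fraction(10, kd):.1f}%")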
Blue is the only colour of light available at depth underwater, so it is the only colour that can be reflected back to the eye, and everything has a blue tinge under water. A red object at depth will not appear red to us because there is no red light available to reflect off of the object. Objects in water will only appear as their real colours near the surface where all wavelengths of light are still available, or if the other wavelengths of light are provided artificially, such as by illuminating the object with a dive light. Water in the open ocean appears clear and blue because it contains much less particulate matter, such as phytoplankton or other suspended particles, and the clearer the water, the deeper the light penetration. Blue light penetrates deeply and is scattered by the water molecules, while all other colours are absorbed; thus the water appears blue. On the other hand, coastal water often appears greenish. Coastal water contains much more suspended silt and algae and microscopic organisms than the open ocean. Many of these organisms, such as phytoplankton, absorb light in the blue and red range through their photosynthetic pigments, leaving green as the dominant wavelength of reflected light. Therefore the higher the phytoplankton concentration in water, the greener it appears. Small silt particles may also absorb blue light, further shifting the colour of water away from blue when there are high concentrations of suspended particles. The ocean can be divided into depth layers depending on the amount of light penetration, as discussed in pelagic zone. The upper 200 metres is referred to as the photic or euphotic zone. This represents the region where enough light can penetrate to support photosynthesis, and it corresponds to the epipelagic zone. From 200 to 1000 metres lies the dysphotic zone, or the twilight zone (corresponding with the mesopelagic zone). There is still some light at these depths, but not enough to support photosynthesis. Below 1000 metres is the aphotic (or midnight) zone, where no light penetrates. This region includes the majority of the ocean volume, which exists in complete darkness. Paleoclimatology Phytoplankton are unicellular microorganisms which form the base of the ocean food chains. They are dominated by diatoms, which grow silicate shells called frustules. When diatoms die their shells can settle on the seafloor and become microfossils. Over time, these microfossils become buried as opal deposits in the marine sediment. Paleoclimatology is the study of past climates. Proxy data is used in order to relate elements collected in modern-day sedimentary samples to climatic and oceanic conditions in the past. Paleoclimate proxies refer to preserved or fossilized physical markers which serve as substitutes for direct meteorological or ocean measurements. An example of proxies is the use of diatom isotope records of δ13C, δ18O, δ30Si (δ13Cdiatom, δ18Odiatom, and δ30Sidiatom). In 2015, Swann and Snelling used these isotope records to document historic changes in the photic zone conditions of the north-west Pacific Ocean, including nutrient supply and the efficiency of the soft-tissue biological pump, from the modern day back to marine isotope stage 5e, which coincides with the last interglacial period. Peaks in opal productivity in the marine isotope stage are associated with the breakdown of the regional halocline stratification and increased nutrient supply to the photic zone. 
The initial development of the halocline and stratified water column has been attributed to the onset of major Northern Hemisphere glaciation at 2.73 Ma, which increased the flux of freshwater to the region, via increased monsoonal rainfall and/or glacial meltwater, and sea surface temperatures. The decrease of abyssal water upwelling associated with this may have contributed to the establishment of globally cooler conditions and the expansion of glaciers across the Northern Hemisphere from 2.73 Ma. While the halocline appears to have prevailed through the late Pliocene and early Quaternary glacial–interglacial cycles, other studies have shown that the stratification boundary may have broken down in the late Quaternary at glacial terminations and during the early part of interglacials. Phytoplankton side notes Phytoplankton are restricted to the photic zone because their growth is completely dependent upon photosynthesis; as a result, they are concentrated in roughly the upper 50–100 m of the ocean. Growth is also supported by inputs from land, for example minerals dissolved from rocks and mineral nutrients from generations of plants and animals, that make their way into the photic zone. An increase in the amount of phytoplankton also creates an increase in zooplankton, which feed on the phytoplankton at the base of the food chain. Dimethylsulfide Dimethylsulfide loss within the photic zone is controlled by microbial uptake and photochemical degradation. Dimethylsulfide helps regulate the sulfur cycle and ecology within the ocean; it is released by marine bacteria, algae, coral and most other marine organisms, and its production involves a range of gene families. The compound can be toxic to humans if swallowed, absorbed through the skin or inhaled. Proteins within plants and animals depend on this compound, which makes it a significant part of the ecology of the photic zone.
Physical sciences
Oceanography
Earth science
41545
https://en.wikipedia.org/wiki/Avogadro%20constant
Avogadro constant
The Avogadro constant, commonly denoted NA or L, is an SI defining constant with an exact value of 6.02214076×10²³ mol⁻¹ (reciprocal moles). It is this defined number of constituent particles (usually molecules, atoms, ions, or ion pairs—in general, entities) per mole (SI unit), and it is used as a normalization factor in relating the amount of substance, n(X), in a sample of a substance X to the corresponding number of entities, N(X): n(X) = N(X)(1/NA), an aggregate of N(X) reciprocal Avogadro constants. By setting N(X) = 1, a reciprocal Avogadro constant is seen to be equal to one entity, which means that n(X) is more easily interpreted as an aggregate of N(X) entities. In the SI dimensional analysis of measurement units, the dimension of the Avogadro constant is the reciprocal of amount of substance, denoted N−1. The Avogadro number, sometimes denoted N0, is the numeric value of the Avogadro constant (i.e., without a unit), namely the dimensionless number 6.02214076×10²³; the value was chosen based on the number of atoms in 12 grams of carbon-12, in alignment with the historical definition of a mole. The constant is named after the Italian physicist and chemist Amedeo Avogadro (1776–1856). The Avogadro constant is also the factor that converts the average mass (m) of one particle, in grams, to the molar mass (M) of the substance, in grams per mole (g/mol). That is, M = m·NA. The constant also relates the molar volume (the volume per mole) of a substance to the average volume nominally occupied by one of its particles, when both are expressed in the same units of volume. For example, since the molar volume of water in ordinary conditions is about , the volume occupied by one molecule of water is about , or about (cubic nanometres). For a crystalline substance, NA relates the volume of a crystal with one mole worth of repeating unit cells, to the volume of a single cell (both in the same units). Definition The Avogadro constant was historically derived from the old definition of the mole as the amount of substance in 12 grams of carbon-12 (12C); or, equivalently, the number of daltons in a gram, where the dalton is defined as 1/12 of the mass of a 12C atom. By this old definition, the numerical value of the Avogadro constant in mol−1 (the Avogadro number) was a physical constant that had to be determined experimentally. The redefinition of the mole in 2019, as being the amount of substance containing exactly 6.02214076×10²³ particles, meant that the mass of 1 mole of a substance is now exactly the product of the Avogadro number and the average mass of its particles. The dalton, however, is still defined as 1/12 of the mass of a 12C atom, which must be determined experimentally and is known only with finite accuracy. The prior experiments that aimed to determine the Avogadro constant are now re-interpreted as measurements of the value in grams of the dalton. By the old definition of the mole, the numerical value of the mass of one mole of a substance, expressed in grams, was precisely equal to the average mass of one particle in daltons. With the new definition, this numerical equivalence is no longer exact, as it is affected by the uncertainty of the value of the dalton in SI units. However, it is still applicable for all practical purposes. For example, the average mass of one molecule of water is about 18.0153 daltons, and the mass of one mole of water is about 18.0153 grams. Also, the Avogadro number is the approximate number of nucleons (protons and neutrons) in one gram of ordinary matter.
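A minimal Python sketch of the two conversions described above: from the average particle mass in grams to the molar mass, and from the molar volume to the volume nominally occupied by one particle. The water molar mass follows the example in the text; the molar volume of about 18 cm³ per mole is an assumed round figure.

N_A = 6.02214076e23   # Avogadro constant, 1/mol (exact since the 2019 SI revision)

# Average particle mass in grams <-> molar mass in g/mol.
water_molar_mass = 18.0153                     # g/mol, from the example above
mass_per_molecule_g = water_molar_mass / N_A
print(f"mass of one water molecule: {mass_per_molecule_g:.3e} g")   # ~2.99e-23 g

# Molar volume -> volume nominally occupied by one particle.
molar_volume_cm3 = 18.0                        # cm^3/mol for liquid water (assumed)
volume_per_molecule_cm3 = molar_volume_cm3 / N_A
volume_per_molecule_nm3 = volume_per_molecule_cm3 * 1e21   # 1 cm^3 = 1e21 nm^3
print(f"volume per molecule: {volume_per_molecule_cm3:.2e} cm^3 "
      f"= {volume_per_molecule_nm3:.3f} nm^3")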
In older literature, the Avogadro number was also denoted , although that conflicts with the symbol for number of particles in statistical mechanics. History Origin of the concept The Avogadro constant is named after the Italian scientist Amedeo Avogadro (1776–1856), who, in 1811, first proposed that the volume of a gas (at a given pressure and temperature) is proportional to the number of atoms or molecules regardless of the nature of the gas. Avogadro's hypothesis was popularized four years after his death by Stanislao Cannizzaro, who advocated Avogadro's work at the Karlsruhe Congress in 1860. The name Avogadro's number was coined in 1909 by the physicist Jean Perrin, who defined it as the number of molecules in exactly 32 grams of oxygen gas. The goal of this definition was to make the mass of a mole of a substance, in grams, be numerically equal to the mass of one molecule relative to the mass of the hydrogen atom; which, because of the law of definite proportions, was the natural unit of atomic mass, and was assumed to be of the atomic mass of oxygen. First measurements The value of Avogadro's number (not yet known by that name) was first obtained indirectly by Josef Loschmidt in 1865, by estimating the number of particles in a given volume of gas. This value, the number density of particles in an ideal gas, is now called the Loschmidt constant in his honor, and is related to the Avogadro constant, , by where is the pressure, is the gas constant, and is the absolute temperature. Because of this work, the symbol is sometimes used for the Avogadro constant, and, in German literature, that name may be used for both constants, distinguished only by the units of measurement. (However, should not be confused with the entirely different Loschmidt constant in English-language literature.) Perrin himself determined the Avogadro number by several different experimental methods. He was awarded the 1926 Nobel Prize in Physics, largely for this work. The electric charge per mole of electrons is a constant called the Faraday constant and has been known since 1834, when Michael Faraday published his works on electrolysis. In 1910, Robert Millikan with the help of Harvey Fletcher obtained the first measurement of the charge on an electron. Dividing the charge on a mole of electrons by the charge on a single electron provided a more accurate estimate of the Avogadro number. SI definition of 1971 In 1971, in its 14th conference, the International Bureau of Weights and Measures (BIPM) decided to regard the amount of substance as an independent dimension of measurement, with the mole as its base unit in the International System of Units (SI). Specifically, the mole was defined as an amount of a substance that contains as many elementary entities as there are atoms in () of carbon-12 (12C). Thus, in particular, one mole of carbon-12 was exactly of the element. By this definition, one mole of any substance contained exactly as many elementary entities as one mole of any other substance. However, this number was a physical constant that had to be experimentally determined since it depended on the mass (in grams) of one atom of 12C, and therefore, it was known only to a limited number of decimal digits. The common rule of thumb that "one gram of matter contains nucleons" was exact for carbon-12, but slightly inexact for other elements and isotopes. In the same conference, the BIPM also named (the factor that converted moles into number of particles) the "Avogadro constant". 
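Two of the relationships mentioned above can be checked numerically: the Loschmidt constant as the ideal-gas number density n0 = p·NA/(R·T), and the electrolysis-era estimate of the Avogadro number as the Faraday constant divided by the elementary charge. A minimal Python sketch; the 0 °C and 1 atm reference conditions are an assumption made for illustration.

N_A = 6.02214076e23     # Avogadro constant, 1/mol
R = 8.314462618         # molar gas constant, J/(mol K)
F = 96485.33212         # Faraday constant, C/mol
e = 1.602176634e-19     # elementary charge, C (exact)

# Loschmidt constant: ideal-gas number density n0 = p * N_A / (R * T),
# evaluated at assumed reference conditions of 0 degrees Celsius and 1 atm.
p, T = 101325.0, 273.15
n0 = p * N_A / (R * T)
print(f"Loschmidt constant: {n0:.4e} per m^3")   # about 2.687e25 m^-3

# Millikan-style estimate: charge per mole of electrons divided by the
# charge of a single electron recovers the Avogadro number.
print(f"F / e = {F / e:.5e} per mol")            # about 6.02214e23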
However, the term "Avogadro number" continued to be used, especially in introductory works. As a consequence of this definition, was not a pure number, but had the metric dimension of reciprocal of amount of substance (mol−1). SI redefinition of 2019 In its 26th Conference, the BIPM adopted a different approach: effective 20 May 2019, it defined the Avogadro constant as the exact value , thus redefining the mole as exactly constituent particles of the substance under consideration. One consequence of this change is that the mass of a mole of 12C atoms is no longer exactly 0.012 kg. On the other hand, the dalton ( universal atomic mass unit) remains unchanged as of the mass of 12C. Thus, the molar mass constant remains very close to but no longer exactly equal to 1 g/mol, although the difference ( in relative terms, as of March 2019) is insignificant for all practical purposes. Connection to other constants The Avogadro constant is related to other physical constants and properties. It relates the molar gas constant and the Boltzmann constant , which in the SI is defined to be exactly :   It relates the Faraday constant and the elementary charge , which in the SI is defined as exactly :   It relates the molar mass constant and the atomic mass constant currently
Physical sciences
Substance
Chemistry
41559
https://en.wikipedia.org/wiki/Plane%20wave
Plane wave
In physics, a plane wave is a special case of a wave or field: a physical quantity whose value, at any given moment, is constant through any plane that is perpendicular to a fixed direction in space. For any position in space and any time , the value of such a field can be written as where is a unit-length vector, and is a function that gives the field's value as dependent on only two real parameters: the time , and the scalar-valued displacement of the point along the direction . The displacement is constant over each plane perpendicular to . The values of the field may be scalars, vectors, or any other physical or mathematical quantity. They can be complex numbers, as in a complex exponential plane wave. When the values of are vectors, the wave is said to be a longitudinal wave if the vectors are always collinear with the vector , and a transverse wave if they are always orthogonal (perpendicular) to it. Special types Traveling plane wave Often the term "plane wave" refers specifically to a traveling plane wave, whose evolution in time can be described as simple translation of the field at a constant wave speed along the direction perpendicular to the wavefronts. Such a field can be written as where is now a function of a single real parameter , that describes the "profile" of the wave, namely the value of the field at time , for each displacement . In that case, is called the direction of propagation. For each displacement , the moving plane perpendicular to at distance from the origin is called a "wavefront". This plane travels along the direction of propagation with velocity ; and the value of the field is then the same, and constant in time, at every one of its points. Sinusoidal plane wave The term is also used, even more specifically, to mean a "monochromatic" or sinusoidal plane wave: a travelling plane wave whose profile is a sinusoidal function. That is, The parameter , which may be a scalar or a vector, is called the amplitude of the wave; the scalar coefficient is its "spatial frequency"; and the scalar is its "phase shift". A true plane wave cannot physically exist, because it would have to fill all space. Nevertheless, the plane wave model is important and widely used in physics. The waves emitted by any source with finite extent into a large homogeneous region of space can be well approximated by plane waves when viewed over any part of that region that is sufficiently small compared to its distance from the source. That is the case, for example, of the light waves from a distant star that arrive at a telescope. Plane standing wave A standing wave is a field whose value can be expressed as the product of two functions, one depending only on position, the other only on time. A plane standing wave, in particular, can be expressed as where is a function of one scalar parameter (the displacement ) with scalar or vector values, and is a scalar function of time. This representation is not unique, since the same field values are obtained if and are scaled by reciprocal factors. If is bounded in the time interval of interest (which is usually the case in physical contexts), and can be scaled so that the maximum value of is 1. Then will be the maximum field magnitude seen at the point . Properties A plane wave can be studied by ignoring the directions perpendicular to the direction vector ; that is, by considering the function as a wave in a one-dimensional medium. Any local operator, linear or not, applied to a plane wave yields a plane wave. 
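A minimal Python sketch of a travelling sinusoidal plane wave as described above: the field depends on position only through the displacement x·n̂ along the propagation direction, so it is constant over every plane perpendicular to n̂. The amplitude, wavelength and direction are arbitrary illustrative choices.

import numpy as np

# Illustrative parameters, not taken from the text.
A = 1.0                                # amplitude
wavelength, speed = 0.5, 1.0           # arbitrary units
k = 2 * np.pi / wavelength             # angular wavenumber
omega = k * speed                      # angular frequency
n_hat = np.array([1.0, 0.0, 0.0])      # unit propagation direction

def plane_wave(x, t):
    """Travelling sinusoidal plane wave A * cos(k * (x . n_hat) - omega * t)."""
    displacement = np.dot(x, n_hat)    # position enters only through x . n_hat
    return A * np.cos(k * displacement - omega * t)

# Two points in the same plane perpendicular to n_hat (same x-component here)
# carry the same field value at every time, the defining property of a plane wave.
p1, p2 = np.array([0.3, 1.0, -2.0]), np.array([0.3, -5.0, 7.0])
print(plane_wave(p1, 0.2), plane_wave(p2, 0.2))   # identical values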
Any linear combination of plane waves with the same normal vector is also a plane wave. For a scalar plane wave in two or three dimensions, the gradient of the field is always collinear with the direction ; specifically, , where is the partial derivative of with respect to the first argument. The divergence of a vector-valued plane wave depends only on the projection of the vector in the direction . Specifically, In particular, a transverse planar wave satisfies for all and .
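The collinearity of the gradient with the propagation direction, stated above for scalar plane waves, can be verified numerically with finite differences. A minimal Python sketch; the wave parameters, evaluation point and step size are arbitrary.

import numpy as np

n_hat = np.array([0.6, 0.8, 0.0])   # unit propagation direction (illustrative)
k, omega = 4.0, 4.0                 # arbitrary wavenumber and angular frequency

def f(x, t):
    """Scalar plane wave: depends on position only through x . n_hat."""
    return np.sin(k * np.dot(x, n_hat) - omega * t)

def gradient(x, t, h=1e-6):
    """Central finite-difference gradient of f with respect to position."""
    g = np.zeros(3)
    for i in range(3):
        step = np.zeros(3)
        step[i] = h
        g[i] = (f(x + step, t) - f(x - step, t)) / (2 * h)
    return g

g = gradient(np.array([0.2, -1.3, 5.0]), 0.7)
# Collinearity with n_hat: the cross product should vanish (up to rounding error).
print(np.linalg.norm(np.cross(g, n_hat)))   # ~1e-10 or smaller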
Physical sciences
Waves
Physics
41564
https://en.wikipedia.org/wiki/Polarization%20%28waves%29
Polarization (waves)
Polarization (also polarisation) is a property of transverse waves which specifies the geometrical orientation of the oscillations. In a transverse wave, the direction of the oscillation is perpendicular to the direction of motion of the wave. One example of a polarized transverse wave is vibrations traveling along a taut string, for example, in a musical instrument like a guitar string. Depending on how the string is plucked, the vibrations can be in a vertical direction, horizontal direction, or at any angle perpendicular to the string. In contrast, in longitudinal waves, such as sound waves in a liquid or gas, the displacement of the particles in the oscillation is always in the direction of propagation, so these waves do not exhibit polarization. Transverse waves that exhibit polarization include electromagnetic waves such as light and radio waves, gravitational waves, and transverse sound waves (shear waves) in solids. An electromagnetic wave such as light consists of a coupled oscillating electric field and magnetic field which are always perpendicular to each other. Different states of polarization correspond to different relationships between the orientation of the oscillating fields and the direction of propagation. In linear polarization, the fields oscillate in a single direction. In circular or elliptical polarization, the fields rotate at a constant rate in a plane as the wave travels, either in the right-hand or in the left-hand direction. Light or other electromagnetic radiation from many sources, such as the sun, flames, and incandescent lamps, consists of short wave trains with an equal mixture of polarizations; this is called unpolarized light. Polarized light can be produced by passing unpolarized light through a polarizer, which allows waves of only one polarization to pass through. The most common optical materials do not affect the polarization of light, but some materials—those that exhibit birefringence, dichroism, or optical activity—affect light differently depending on its polarization. Some of these are used to make polarizing filters. Light also becomes partially polarized when it reflects at an angle from a surface. According to quantum mechanics, electromagnetic waves can also be viewed as streams of particles called photons. When viewed in this way, the polarization of an electromagnetic wave is determined by a quantum mechanical property of photons called their spin. A photon has one of two possible spins: it can either spin in a right hand sense or a left hand sense about its direction of travel. Circularly polarized electromagnetic waves are composed of photons with only one type of spin, either right- or left-hand. Linearly polarized waves consist of photons that are in a superposition of right and left circularly polarized states, with equal amplitude and phases synchronized to give oscillation in a plane. Polarization is an important parameter in areas of science dealing with transverse waves, such as optics, seismology, radio, and microwaves. Especially impacted are technologies such as lasers, wireless and optical fiber telecommunications, and radar. Introduction Wave propagation and polarization Most sources of light are classified as incoherent and unpolarized (or only "partially polarized") because they consist of a random mixture of waves having different spatial characteristics, frequencies (wavelengths), phases, and polarization states.
However, for understanding electromagnetic waves and polarization in particular, it is easier to just consider coherent plane waves; these are sinusoidal waves of one particular direction (or wavevector), frequency, phase, and polarization state. Characterizing an optical system in relation to a plane wave with those given parameters can then be used to predict its response to a more general case, since a wave with any specified spatial structure can be decomposed into a combination of plane waves (its so-called angular spectrum). Incoherent states can be modeled stochastically as a weighted combination of such uncorrelated waves with some distribution of frequencies (its spectrum), phases, and polarizations. Transverse electromagnetic waves Electromagnetic waves (such as light), traveling in free space or another homogeneous isotropic non-attenuating medium, are properly described as transverse waves, meaning that a plane wave's electric field vector and magnetic field are each in some direction perpendicular to (or "transverse" to) the direction of wave propagation; and are also perpendicular to each other. By convention, the "polarization" direction of an electromagnetic wave is given by its electric field vector. Considering a monochromatic plane wave of optical frequency (light of vacuum wavelength has a frequency of where is the speed of light), let us take the direction of propagation as the axis. Being a transverse wave the and fields must then contain components only in the and directions whereas . Using complex (or phasor) notation, the instantaneous physical electric and magnetic fields are given by the real parts of the complex quantities occurring in the following equations. As a function of time and spatial position (since for a plane wave in the direction the fields have no dependence on or ) these complex fields can be written as: and where is the wavelength (whose refractive index is ) and is the period of the wave. Here , , , and are complex numbers. In the second more compact form, as these equations are customarily expressed, these factors are described using the wavenumber and angular frequency (or "radian frequency") . In a more general formulation with propagation restricted to the direction, then the spatial dependence is replaced by where is called the wave vector, the magnitude of which is the wavenumber. Thus the leading vectors and each contain up to two nonzero (complex) components describing the amplitude and phase of the wave's and polarization components (again, there can be no polarization component for a transverse wave in the direction). For a given medium with a characteristic impedance , is related to by: In a dielectric, is real and has the value , where is the refractive index and is the impedance of free space. The impedance will be complex in a conducting medium. Note that given that relationship, the dot product of and must be zero: indicating that these vectors are orthogonal (at right angles to each other), as expected. Knowing the propagation direction ( in this case) and , one can just as well specify the wave in terms of just and describing the electric field. The vector containing and (but without the component which is necessarily zero for a transverse wave) is known as a Jones vector. In addition to specifying the polarization state of the wave, a general Jones vector also specifies the overall magnitude and phase of that wave. 
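A minimal Python sketch of the phasor convention described above: the instantaneous, physical field is the real part of a complex amplitude multiplied by exp(i(kz − ωt)). The amplitudes and the sign convention of the exponent are illustrative choices; texts differ on the sign.

import numpy as np

wavelength = 500e-9                     # 500 nm, illustrative
c = 299_792_458.0                       # speed of light, m/s
k = 2 * np.pi / wavelength              # wavenumber
omega = 2 * np.pi * c / wavelength      # angular frequency

# Complex (phasor) amplitudes of the x and y components; here the y component
# has half the amplitude and lags x by 90 degrees (illustrative values).
e_x = 1.0 + 0.0j
e_y = 0.5 * np.exp(-1j * np.pi / 2)

def instantaneous_field(z, t):
    """Real, physical field components: the real part of phasor * exp(i(kz - wt))."""
    propagation = np.exp(1j * (k * z - omega * t))
    return (e_x * propagation).real, (e_y * propagation).real

print(instantaneous_field(0.0, 0.0))    # approximately (1.0, 0.0) at the origin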
Specifically, the intensity of the light wave is proportional to the sum of the squared magnitudes of the two electric field components: However, the wave's state of polarization is only dependent on the (complex) ratio of to . So let us just consider waves whose ; this happens to correspond to an intensity of about in free space (where ). And because the absolute phase of a wave is unimportant in discussing its polarization state, let us stipulate that the phase of is zero; in other words is a real number while may be complex. Under these restrictions, and can be represented as follows: where the polarization state is now fully parameterized by the value of (such that ) and the relative phase . Non-transverse waves In addition to transverse waves, there are many wave motions where the oscillation is not limited to directions perpendicular to the direction of propagation. These cases are far beyond the scope of the current article which concentrates on transverse waves (such as most electromagnetic waves in bulk media), but one should be aware of cases where the polarization of a coherent wave cannot be described simply using a Jones vector, as we have just done. Just considering electromagnetic waves, we note that the preceding discussion strictly applies to plane waves in a homogeneous isotropic non-attenuating medium, whereas in an anisotropic medium (such as birefringent crystals as discussed below) the electric or magnetic field may have longitudinal as well as transverse components. In those cases the electric displacement and magnetic flux density still obey the above geometry but due to anisotropy in the electric susceptibility (or in the magnetic permeability), now given by a tensor, the direction of (or ) may differ from that of (or ). Even in isotropic media, so-called inhomogeneous waves can be launched into a medium whose refractive index has a significant imaginary part (or "extinction coefficient") such as metals; these fields are also not strictly transverse. Surface waves or waves propagating in a waveguide (such as an optical fiber) are generally transverse waves, but might be described as an electric or magnetic transverse mode, or a hybrid mode. Even in free space, longitudinal field components can be generated in focal regions, where the plane wave approximation breaks down. An extreme example is radially or tangentially polarized light, at the focus of which the electric or magnetic field respectively is longitudinal (along the direction of propagation). For longitudinal waves such as sound waves in fluids, the direction of oscillation is by definition along the direction of travel, so the issue of polarization is normally not even mentioned. On the other hand, sound waves in a bulk solid can be transverse as well as longitudinal, for a total of three polarization components. In this case, the transverse polarization is associated with the direction of the shear stress and displacement in directions perpendicular to the propagation direction, while the longitudinal polarization describes compression of the solid and vibration along the direction of propagation. The differential propagation of transverse and longitudinal polarizations is important in seismology. Polarization state Polarization can be defined in terms of pure polarization states with only a coherent sinusoidal wave at one optical frequency. 
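A minimal Python sketch of the parameterization just described: with the overall intensity and absolute phase fixed, a pure polarization state reduces to a real amplitude-ratio parameter and a relative phase between the x and y components of the Jones vector. The particular states built here are standard textbook examples, and the handedness labelling depends on convention.

import numpy as np

def jones_vector(amplitude_angle, relative_phase):
    """Unit-intensity Jones vector [e_x, e_y] with e_x chosen real.

    amplitude_angle (radians) sets |e_x| = cos(angle) and |e_y| = sin(angle);
    relative_phase is the phase of e_y relative to e_x.
    """
    return np.array([np.cos(amplitude_angle),
                     np.sin(amplitude_angle) * np.exp(1j * relative_phase)])

states = {
    "horizontal linear": jones_vector(0.0, 0.0),
    "linear at 45 deg": jones_vector(np.pi / 4, 0.0),
    "circular (one handedness)": jones_vector(np.pi / 4, np.pi / 2),
}
for name, v in states.items():
    intensity = float(np.sum(np.abs(v) ** 2))   # proportional to the wave intensity
    print(name, np.round(v, 3), "intensity:", round(intensity, 3))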
The vector in the adjacent diagram might describe the oscillation of the electric field emitted by a single-mode laser (whose oscillation frequency would be typically times faster). The field oscillates in the -plane, along the page, with the wave propagating in the direction, perpendicular to the page. The first two diagrams below trace the electric field vector over a complete cycle for linear polarization at two different orientations; these are each considered a distinct state of polarization (SOP). The linear polarization at 45° can also be viewed as the addition of a horizontally linearly polarized wave (as in the leftmost figure) and a vertically polarized wave of the same amplitude . Now if one were to introduce a phase shift in between those horizontal and vertical polarization components, one would generally obtain elliptical polarization as is shown in the third figure. When the phase shift is exactly ±90°, and the amplitudes are the same, then circular polarization is produced (fourth and fifth figures). Circular polarization can be created by sending linearly polarized light through a quarter-wave plate oriented at 45° to the linear polarization to create two components of the same amplitude with the required phase shift. The superposition of the original and phase-shifted components causes a rotating electric field vector, which is depicted in the animation on the right. Note that circular or elliptical polarization can involve either a clockwise or counterclockwise rotation of the field, depending on the relative phases of the components. These correspond to distinct polarization states, such as the two circular polarizations shown above. The orientation of the and axes used in this description is arbitrary. The choice of such a coordinate system and viewing the polarization ellipse in terms of the and polarization components, corresponds to the definition of the Jones vector (below) in terms of those basis polarizations. Axes are selected to suit a particular problem, such as being in the plane of incidence. Since there are separate reflection coefficients for the linear polarizations in and orthogonal to the plane of incidence (p and s polarizations, see below), that choice greatly simplifies the calculation of a wave's reflection from a surface. Any pair of orthogonal polarization states may be used as basis functions, not just linear polarizations. For instance, choosing right and left circular polarizations as basis functions simplifies the solution of problems involving circular birefringence (optical activity) or circular dichroism. Polarization ellipse For a purely polarized monochromatic wave the electric field vector over one cycle of oscillation traces out an ellipse. A polarization state can then be described in relation to the geometrical parameters of the ellipse, and its "handedness", that is, whether the rotation around the ellipse is clockwise or counter clockwise. One parameterization of the elliptical figure specifies the orientation angle , defined as the angle between the major axis of the ellipse and the -axis along with the ellipticity , the ratio of the ellipse's major to minor axis. (also known as the axial ratio). The ellipticity parameter is an alternative parameterization of an ellipse's eccentricity or the ellipticity angle, as is shown in the figure. The angle is also significant in that the latitude (angle from the equator) of the polarization state as represented on the Poincaré sphere (see below) is equal to . 
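The quarter-wave-plate construction mentioned above can be written out in Jones calculus. A minimal Python sketch, assuming a common textbook form of the wave-plate matrix (fast axis along x, a 90° retardation applied to the y component); sign and handedness conventions vary between sources.

import numpy as np

# Quarter-wave plate with its fast axis along x (one common textbook convention):
# it retards the y component by a quarter cycle (90 degrees) relative to x.
QWP = np.array([[1.0, 0.0],
                [0.0, 1.0j]])

# Linearly polarized input oriented at 45 degrees to the plate's fast axis.
linear_45 = np.array([1.0, 1.0]) / np.sqrt(2)

out = QWP @ linear_45
print(np.round(out, 3))   # [0.707, 0.707j]: equal amplitudes, 90 degrees apart
print(np.abs(out))        # both components ~0.707, so the output is circular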
The special cases of linear and circular polarization correspond to an ellipticity of infinity and unity (or of zero and 45°) respectively. Jones vector Full information on a completely polarized state is also provided by the amplitude and phase of oscillations in two components of the electric field vector in the plane of polarization. This representation was used above to show how different states of polarization are possible. The amplitude and phase information can be conveniently represented as a two-dimensional complex vector (the Jones vector): Here and denote the amplitude of the wave in the two components of the electric field vector, while and represent the phases. The product of a Jones vector with a complex number of unit modulus gives a different Jones vector representing the same ellipse, and thus the same state of polarization. The physical electric field, as the real part of the Jones vector, would be altered but the polarization state itself is independent of absolute phase. The basis vectors used to represent the Jones vector need not represent linear polarization states (i.e. be real). In general any two orthogonal states can be used, where an orthogonal vector pair is formally defined as one having a zero inner product. A common choice is left and right circular polarizations, for example to model the different propagation of waves in two such components in circularly birefringent media (see below) or signal paths of coherent detectors sensitive to circular polarization. Coordinate frame Regardless of whether polarization state is represented using geometric parameters or Jones vectors, implicit in the parameterization is the orientation of the coordinate frame. This permits a degree of freedom, namely rotation about the propagation direction. When considering light that is propagating parallel to the surface of the Earth, the terms "horizontal" and "vertical" polarization are often used, with the former being associated with the first component of the Jones vector, or zero azimuth angle. On the other hand, in astronomy the equatorial coordinate system is generally used instead, with the zero azimuth (or position angle, as it is more commonly called in astronomy to avoid confusion with the horizontal coordinate system) corresponding to due north. s and p designations Another coordinate system frequently used relates to the plane of incidence. This is the plane made by the incoming propagation direction and the vector perpendicular to the plane of an interface, in other words, the plane in which the ray travels before and after reflection or refraction. The component of the electric field parallel to this plane is termed p-like (parallel) and the component perpendicular to this plane is termed s-like (from , German for 'perpendicular'). Polarized light with its electric field along the plane of incidence is thus denoted , while light whose electric field is normal to the plane of incidence is called . P-polarization is commonly referred to as transverse-magnetic (TM), and has also been termed pi-polarized or -polarized, or tangential plane polarized. S-polarization is also called transverse-electric (TE), as well as sigma-polarized or σ-polarized, or sagittal plane polarized. Degree of polarization Degree of polarization (DOP) is a quantity used to describe the portion of an electromagnetic wave which is polarized. can be calculated from the Stokes parameters. A perfectly polarized wave has a of 100%, whereas an unpolarized wave has a of 0%. 
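As a concrete illustration of the last point, the degree of polarization can be computed from a measured Stokes vector. The short Python sketch below is an added example; the Stokes vectors used are invented for illustration, not taken from the article:

```python
import numpy as np

def degree_of_polarization(S):
    """DOP from a Stokes vector S = (S0, S1, S2, S3)."""
    S0, S1, S2, S3 = S
    return np.sqrt(S1**2 + S2**2 + S3**2) / S0

# Fully polarized horizontal linear light
print(degree_of_polarization((1.0, 1.0, 0.0, 0.0)))   # 1.0
# Unpolarized light
print(degree_of_polarization((1.0, 0.0, 0.0, 0.0)))   # 0.0
# Partial mix: 70% of the power in the polarized component
print(degree_of_polarization((1.0, 0.7, 0.0, 0.0)))   # 0.7
```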
A wave which is partially polarized, and therefore can be represented by a superposition of a polarized and unpolarized component, will have a somewhere in between 0 and 100%. is calculated as the fraction of the total power that is carried by the polarized component of the wave. can be used to map the strain field in materials when considering the of the photoluminescence. The polarization of the photoluminescence is related to the strain in a material by way of the given material's photoelasticity tensor. is also visualized using the Poincaré sphere representation of a polarized beam. In this representation, is equal to the length of the vector measured from the center of the sphere. Unpolarized and partially polarized light Implications for reflection and propagation Polarization in wave propagation In a vacuum, the components of the electric field propagate at the speed of light, so that the phase of the wave varies in space and time while the polarization state does not. That is, the electric field vector of a plane wave in the direction follows: where is the wavenumber. As noted above, the instantaneous electric field is the real part of the product of the Jones vector times the phase factor When an electromagnetic wave interacts with matter, its propagation is altered according to the material's (complex) index of refraction. When the real or imaginary part of that refractive index is dependent on the polarization state of a wave, properties known as birefringence and polarization dichroism (or diattenuation) respectively, then the polarization state of a wave will generally be altered. In such media, an electromagnetic wave with any given state of polarization may be decomposed into two orthogonally polarized components that encounter different propagation constants. The effect of propagation over a given path on those two components is most easily characterized in the form of a complex transformation matrix known as a Jones matrix: The Jones matrix due to passage through a transparent material is dependent on the propagation distance as well as the birefringence. The birefringence (as well as the average refractive index) will generally be dispersive, that is, it will vary as a function of optical frequency (wavelength). In the case of non-birefringent materials, however, the Jones matrix is the identity matrix (multiplied by a scalar phase factor and attenuation factor), implying no change in polarization during propagation. For propagation effects in two orthogonal modes, the Jones matrix can be written as where and are complex numbers describing the phase delay and possibly the amplitude attenuation due to propagation in each of the two polarization eigenmodes. is a unitary matrix representing a change of basis from these propagation modes to the linear system used for the Jones vectors; in the case of linear birefringence or diattenuation the modes are themselves linear polarization states so and can be omitted if the coordinate axes have been chosen appropriately. Birefringence In a birefringent substance, electromagnetic waves of different polarizations travel at different speeds (phase velocities). As a result, when unpolarized waves travel through a plate of birefringent material, one polarization component has a shorter wavelength than the other, resulting in a phase difference between the components which increases the further the waves travel through the material. The Jones matrix is a unitary matrix: . 
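The change-of-basis form of the Jones matrix described above can be made concrete with a small numerical sketch. In the following Python fragment, added for illustration (the function names and the quarter-wave-plate example are assumptions, not notation from the article), a lossless linear retarder is built as a rotation times a diagonal phase-delay matrix, its unitarity is checked, and a quarter-wave plate at 45° is shown converting horizontal linear polarization into circular polarization:

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def retarder(delta, theta=0.0):
    """Jones matrix of a lossless linear retarder.

    delta : phase delay between the two linear eigenmodes
            (delta = pi/2 for a quarter-wave plate),
    theta : orientation of the fast axis.
    Built as J = U.diag(1, exp(i*delta)).U^-1, mirroring the
    change-of-basis form described in the text above.
    """
    U = rotation(theta)
    G = np.diag([1.0, np.exp(1j * delta)])
    return U @ G @ np.linalg.inv(U)

qwp45 = retarder(np.pi / 2, np.pi / 4)                   # quarter-wave plate at 45 deg
print(np.allclose(qwp45.conj().T @ qwp45, np.eye(2)))    # unitary -> True

horiz = np.array([1.0, 0.0])                             # horizontally polarized input
out = qwp45 @ horiz
print(np.round(out, 3))    # equal amplitudes with a 90 deg relative phase: circular
print(np.abs(out))         # ~0.707, 0.707
```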
Media termed diattenuating (or dichroic in the sense of polarization), in which only the amplitudes of the two polarizations are affected differentially, may be described using a Hermitian matrix (generally multiplied by a common phase factor). In fact, since any matrix may be written as the product of unitary and positive Hermitian matrices, light propagation through any sequence of polarization-dependent optical components can be written as the product of these two basic types of transformations. In birefringent media there is no attenuation, but the two modes accrue a differential phase delay. Well known manifestations of linear birefringence (that is, in which the basis polarizations are orthogonal linear polarizations) appear in optical wave plates/retarders and many crystals. If linearly polarized light passes through a birefringent material, its state of polarization will generally change, unless its polarization direction is identical to one of those basis polarizations. Since the phase shift, and thus the change in polarization state, is usually wavelength-dependent, such objects viewed under white light in between two polarizers may give rise to colorful effects, as seen in the accompanying photograph. Circular birefringence is also termed optical activity, especially in chiral fluids, or Faraday rotation, when due to the presence of a magnetic field along the direction of propagation. When linearly polarized light is passed through such an object, it will exit still linearly polarized, but with the axis of polarization rotated. A combination of linear and circular birefringence will have as basis polarizations two orthogonal elliptical polarizations; however, the term "elliptical birefringence" is rarely used. One can visualize the case of linear birefringence (with two orthogonal linear propagation modes) with an incoming wave linearly polarized at a 45° angle to those modes. As a differential phase starts to accrue, the polarization becomes elliptical, eventually changing to purely circular polarization (90° phase difference), then to elliptical and eventually linear polarization (180° phase) perpendicular to the original polarization, then through circular again (270° phase), then elliptical with the original azimuth angle, and finally back to the original linearly polarized state (360° phase) where the cycle begins anew. In general the situation is more complicated and can be characterized as a rotation in the Poincaré sphere about the axis defined by the propagation modes. Examples for linear (blue), circular (red), and elliptical (yellow) birefringence are shown in the figure on the left. The total intensity and degree of polarization are unaffected. If the path length in the birefringent medium is sufficient, the two polarization components of a collimated beam (or ray) can exit the material with a positional offset, even though their final propagation directions will be the same (assuming the entrance face and exit face are parallel). This is commonly viewed using calcite crystals, which present the viewer with two slightly offset images, in opposite polarizations, of an object behind the crystal. It was this effect that provided the first discovery of polarization, by Erasmus Bartholinus in 1669. Dichroism Media in which transmission of one polarization mode is preferentially reduced are called dichroic or diattenuating. Like birefringence, diattenuation can be with respect to linear polarization modes (in a crystal) or circular polarization modes (usually in a liquid).
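The diattenuating (dichroic) case can be sketched in the same way. The Python fragment below is an added illustration (the amplitude transmissions chosen are arbitrary example values); it builds a partial linear polarizer as a Hermitian Jones matrix and shows how it pulls a 45° linearly polarized input toward the better-transmitted axis:

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def diattenuator(t1, t2, theta=0.0):
    """Jones matrix of a linear diattenuator (partial polarizer).

    t1, t2 : real amplitude transmissions of the two linear modes
             (t1 = 1, t2 = 0 would be an ideal polarizer),
    theta  : orientation of the high-transmission axis.
    """
    U = rotation(theta)
    return U @ np.diag([t1, t2]) @ U.T   # U is real orthogonal, so U^-1 = U.T

D = diattenuator(1.0, 0.5, theta=0.0)
print(np.allclose(D, D.conj().T))        # Hermitian -> True

# 45-degree linear input: the weaker-transmitted component shrinks,
# so the output azimuth leans toward the x axis.
vin = np.array([1.0, 1.0]) / np.sqrt(2)
vout = D @ vin
print(vout / np.linalg.norm(vout))       # ~[0.894, 0.447], azimuth ~26.6 deg
```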
Devices that block nearly all of the radiation in one mode are known as or simply "polarizers". This corresponds to in the above representation of the Jones matrix. The output of an ideal polarizer is a specific polarization state (usually linear polarization) with an amplitude equal to the input wave's original amplitude in that polarization mode. Power in the other polarization mode is eliminated. Thus if unpolarized light is passed through an ideal polarizer (where and ) exactly half of its initial power is retained. Practical polarizers, especially inexpensive sheet polarizers, have additional loss so that . However, in many instances the more relevant figure of merit is the polarizer's degree of polarization or extinction ratio, which involve a comparison of to . Since Jones vectors refer to waves' amplitudes (rather than intensity), when illuminated by unpolarized light the remaining power in the unwanted polarization will be of the power in the intended polarization. Specular reflection In addition to birefringence and dichroism in extended media, polarization effects describable using Jones matrices can also occur at (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected; for a given material those proportions (and also the phase of reflection) are dependent on the angle of incidence and are different for the s- and p-polarizations. Therefore, the polarization state of reflected light (even if initially unpolarized) is generally changed. Any light striking a surface at a special angle of incidence known as Brewster's angle, where the reflection coefficient for p-polarization is zero, will be reflected with only the s-polarization remaining. This principle is employed in the so-called "pile of plates polarizer" (see figure) in which part of the s-polarization is removed by reflection at each Brewster angle surface, leaving only the p-polarization after transmission through many such surfaces. The generally smaller reflection coefficient of the p-polarization is also the basis of polarized sunglasses; by blocking the s- (horizontal) polarization, most of the glare due to reflection from a wet street, for instance, is removed. In the important special case of reflection at normal incidence (not involving anisotropic materials) there is no particular s- or p-polarization. Both the and polarization components are reflected identically, and therefore the polarization of the reflected wave is identical to that of the incident wave. However, in the case of circular (or elliptical) polarization, the handedness of the polarization state is thereby reversed, since by convention this is specified relative to the direction of propagation. The circular rotation of the electric field around the axes called "right-handed" for a wave in the direction is "left-handed" for a wave in the direction. But in the general case of reflection at a nonzero angle of incidence, no such generalization can be made. For instance, right-circularly polarized light reflected from a dielectric surface at a grazing angle, will still be right-handed (but elliptically) polarized. Linear polarized light reflected from a metal at non-normal incidence will generally become elliptically polarized. These cases are handled using Jones vectors acted upon by the different Fresnel coefficients for the s- and p-polarization components. 
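The s- and p-dependence of reflection mentioned above follows from the Fresnel equations. The following Python sketch is an added illustration; the air-to-glass indices n1 = 1.0 and n2 = 1.5 are assumed example values, not figures from the article. It computes the amplitude reflection coefficients for both polarizations and verifies that the p reflection vanishes at Brewster's angle:

```python
import numpy as np

def fresnel_r(n1, n2, theta_i):
    """Amplitude reflection coefficients (r_s, r_p) at a planar interface.

    n1, n2  : refractive indices of the incident and transmitting media,
    theta_i : angle of incidence in radians.
    """
    theta_t = np.arcsin(n1 * np.sin(theta_i) / n2)   # Snell's law
    r_s = ((n1 * np.cos(theta_i) - n2 * np.cos(theta_t))
           / (n1 * np.cos(theta_i) + n2 * np.cos(theta_t)))
    r_p = ((n2 * np.cos(theta_i) - n1 * np.cos(theta_t))
           / (n2 * np.cos(theta_i) + n1 * np.cos(theta_t)))
    return r_s, r_p

n1, n2 = 1.0, 1.5                       # assumed: air onto glass
brewster = np.arctan(n2 / n1)           # ~56.3 degrees
r_s, r_p = fresnel_r(n1, n2, brewster)
print(np.degrees(brewster))             # Brewster's angle
print(abs(r_p) < 1e-12, r_s)            # p reflection vanishes; s does not

# Reflected power fractions |r|^2 at 45 degrees: s reflects far more than p
print([round(abs(coeff)**2, 3) for coeff in fresnel_r(n1, n2, np.radians(45))])
```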
Measurement techniques involving polarization Some optical measurement techniques are based on polarization. In many other optical techniques polarization is crucial or at least must be taken into account and controlled; such examples are too numerous to mention. Measurement of stress In engineering, the phenomenon of stress induced birefringence allows for stresses in transparent materials to be readily observed. As noted above and seen in the accompanying photograph, the chromaticity of birefringence typically creates colored patterns when viewed in between two polarizers. As external forces are applied, internal stress induced in the material is thereby observed. Additionally, birefringence is frequently observed due to stresses "frozen in" at the time of manufacture. This is famously observed in cellophane tape whose birefringence is due to the stretching of the material during the manufacturing process. Ellipsometry Ellipsometry is a powerful technique for the measurement of the optical properties of a uniform surface. It involves measuring the polarization state of light following specular reflection from such a surface. This is typically done as a function of incidence angle or wavelength (or both). Since ellipsometry relies on reflection, it is not required for the sample to be transparent to light or for its back side to be accessible. Ellipsometry can be used to model the (complex) refractive index of a surface of a bulk material. It is also very useful in determining parameters of one or more thin film layers deposited on a substrate. Due to their reflection properties, not only are the predicted magnitude of the p and s polarization components, but their relative phase shifts upon reflection, compared to measurements using an ellipsometer. A normal ellipsometer does not measure the actual reflection coefficient (which requires careful photometric calibration of the illuminating beam) but the ratio of the p and s reflections, as well as change of polarization ellipticity (hence the name) induced upon reflection by the surface being studied. In addition to use in science and research, ellipsometers are used in situ to control production processes for instance. Geology The property of (linear) birefringence is widespread in crystalline minerals, and indeed was pivotal in the initial discovery of polarization. In mineralogy, this property is frequently exploited using polarization microscopes, for the purpose of identifying minerals. See optical mineralogy for more details. Sound waves in solid materials exhibit polarization. Differential propagation of the three polarizations through the earth is a crucial in the field of seismology. Horizontally and vertically polarized seismic waves (shear waves) are termed SH and SV, while waves with longitudinal polarization (compressional waves) are termed P-waves. Autopsy Similarly, polarization microscopes can be used to aid in the detection of foreign matter in biological tissue slices if it is birefringent; autopsies often mention (a lack of or presence of) "polarizable foreign debris." Chemistry We have seen (above) that the birefringence of a type of crystal is useful in identifying it, and thus detection of linear birefringence is especially useful in geology and mineralogy. Linearly polarized light generally has its polarization state altered upon transmission through such a crystal, making it stand out when viewed in between two crossed polarizers, as seen in the photograph, above. 
Likewise, in chemistry, rotation of polarization axes in a liquid solution can be a useful measurement. In a liquid, linear birefringence is impossible, but there may be circular birefringence when a chiral molecule is in solution. When the right and left handed enantiomers of such a molecule are present in equal numbers (a so-called racemic mixture) then their effects cancel out. However, when there is only one (or a preponderance of one), as is more often the case for organic molecules, a net circular birefringence (or optical activity) is observed, revealing the magnitude of that imbalance (or the concentration of the molecule itself, when it can be assumed that only one enantiomer is present). This is measured using a polarimeter in which polarized light is passed through a tube of the liquid, at the end of which is another polarizer which is rotated in order to null the transmission of light through it. Astronomy In many areas of astronomy, the study of polarized electromagnetic radiation from outer space is of great importance. Although not usually a factor in the thermal radiation of stars, polarization is also present in radiation from coherent astronomical sources (e.g. hydroxyl or methanol masers), and incoherent sources such as the large radio lobes in active galaxies, and pulsar radio radiation (which may, it is speculated, sometimes be coherent), and is also imposed upon starlight by scattering from interstellar dust. Apart from providing information on sources of radiation and scattering, polarization also probes the interstellar magnetic field via Faraday rotation. The polarization of the cosmic microwave background is being used to study the physics of the very early universe. Synchrotron radiation is inherently polarized. It has been suggested that astronomical sources caused the chirality of biological molecules on Earth, but chirality selection on inorganic crystals has been proposed as an alternative theory. Applications and examples Polarized sunglasses Unpolarized light, after being reflected by a specular (shiny) surface, generally obtains a degree of polarization. This phenomenon was observed in the early 1800s by the mathematician Étienne-Louis Malus, after whom Malus's law is named. Polarizing sunglasses exploit this effect to reduce glare from reflections by horizontal surfaces, notably the road ahead viewed at a grazing angle. Wearers of polarized sunglasses will occasionally observe inadvertent polarization effects such as color-dependent birefringent effects, for example in toughened glass (e.g., car windows) or items made from transparent plastics, in conjunction with natural polarization by reflection or scattering. The polarized light from LCD monitors (see below) is extremely conspicuous when these are worn. Sky polarization and photography Polarization is observed in the light of the sky, as this is due to sunlight scattered by aerosols as it passes through Earth's atmosphere. The scattered light produces the brightness and color in clear skies. This partial polarization of scattered light can be used to darken the sky in photographs, increasing the contrast. This effect is most strongly observed at points on the sky making a 90° angle to the Sun. Polarizing filters use these effects to optimize the results of photographing scenes in which reflection or scattering by the sky is involved. Sky polarization has been used for orientation in navigation. 
The Pfund sky compass was used in the 1950s when navigating near the poles of the Earth's magnetic field when neither the sun nor stars were visible (e.g., under daytime cloud or twilight). It has been suggested, controversially, that the Vikings exploited a similar device (the "sunstone") in their extensive expeditions across the North Atlantic in the 9th–11th centuries, before the arrival of the magnetic compass from Asia to Europe in the 12th century. Related to the sky compass is the "polar clock", invented by Charles Wheatstone in the late 19th century. Display technologies The principle of liquid-crystal display (LCD) technology relies on the rotation of the axis of linear polarization by the liquid crystal array. Light from the backlight (or the back reflective layer, in devices not including or requiring a backlight) first passes through a linear polarizing sheet. That polarized light passes through the actual liquid crystal layer which may be organized in pixels (for a TV or computer monitor) or in another format such as a seven-segment display or one with custom symbols for a particular product. The liquid crystal layer is produced with a consistent right (or left) handed chirality, essentially consisting of tiny helices. This causes circular birefringence, and is engineered so that there is a 90 degree rotation of the linear polarization state. However, when a voltage is applied across a cell, the molecules straighten out, lessening or totally losing the circular birefringence. On the viewing side of the display is another linear polarizing sheet, usually oriented at 90 degrees from the one behind the active layer. Therefore, when the circular birefringence is removed by the application of a sufficient voltage, the polarization of the transmitted light remains at right angles to the front polarizer, and the pixel appears dark. With no voltage, however, the 90 degree rotation of the polarization causes it to exactly match the axis of the front polarizer, allowing the light through. Intermediate voltages create intermediate rotation of the polarization axis and the pixel has an intermediate intensity. Displays based on this principle are widespread, and now are used in the vast majority of televisions, computer monitors and video projectors, rendering the previous CRT technology essentially obsolete. The use of polarization in the operation of LCD displays is immediately apparent to someone wearing polarized sunglasses, often making the display unreadable. In a totally different sense, polarization encoding has become the leading (but not sole) method for delivering separate images to the left and right eye in stereoscopic displays used for 3D movies. This involves separate images intended for each eye either projected from two different projectors with orthogonally oriented polarizing filters or, more typically, from a single projector with time multiplexed polarization (a fast alternating polarization device for successive frames). Polarized 3D glasses with suitable polarizing filters ensure that each eye receives only the intended image. Historically such systems used linear polarization encoding because it was inexpensive and offered good separation. However, circular polarization makes separation of the two images insensitive to tilting of the head, and is widely used in 3-D movie exhibition today, such as the system from RealD. 
Projecting such images requires screens that maintain the polarization of the projected light when viewed in reflection (such as silver screens); a normal diffuse white projection screen causes depolarization of the projected images, making it unsuitable for this application. Although now obsolete, CRT computer displays suffered from reflection by the glass envelope, causing glare from room lights and consequently poor contrast. Several anti-reflection solutions were employed to ameliorate this problem. One solution utilized the principle of reflection of circularly polarized light. A circular polarizing filter in front of the screen allows for the transmission of (say) only right circularly polarized room light. Now, right circularly polarized light (depending on the convention used) has its electric (and magnetic) field direction rotating clockwise while propagating in the +z direction. Upon reflection, the field still has the same direction of rotation, but now propagation is in the −z direction making the reflected wave left circularly polarized. With the right circular polarization filter placed in front of the reflecting glass, the unwanted light reflected from the glass will thus be in very polarization state that is blocked by that filter, eliminating the reflection problem. The reversal of circular polarization on reflection and elimination of reflections in this manner can be easily observed by looking in a mirror while wearing 3-D movie glasses which employ left- and right-handed circular polarization in the two lenses. Closing one eye, the other eye will see a reflection in which it cannot see itself; that lens appears black. However, the other lens (of the closed eye) will have the correct circular polarization allowing the closed eye to be easily seen by the open one. Radio transmission and reception All radio (and microwave) antennas used for transmitting or receiving are intrinsically polarized. They transmit in (or receive signals from) a particular polarization, being totally insensitive to the opposite polarization; in certain cases that polarization is a function of direction. Most antennas are nominally linearly polarized, but elliptical and circular polarization is a possibility. In the case of linear polarization, the same kind of filtering as described above, is possible. In the case of elliptical polarization (circular polarization is in reality just a kind of elliptical polarization where the length of both elasticity factors is the same), filtering out a single angle (e.g. 90°) will have virtually no impact as the wave at any time can be in any of the 360 degrees. The vast majority of antennas are linearly polarized. In fact it can be shown from considerations of symmetry that an antenna that lies entirely in a plane which also includes the observer, can only have its polarization in the direction of that plane. This applies to many cases, allowing one to easily infer such an antenna's polarization at an intended direction of propagation. So a typical rooftop Yagi or log-periodic antenna with horizontal conductors, as viewed from a second station toward the horizon, is necessarily horizontally polarized. But a vertical "whip antenna" or AM broadcast tower used as an antenna element (again, for observers horizontally displaced from it) will transmit in the vertical polarization. A turnstile antenna with its four arms in the horizontal plane, likewise transmits horizontally polarized radiation toward the horizon. 
However, when that same turnstile antenna is used in the "axial mode" (upwards, for the same horizontally-oriented structure) its radiation is circularly polarized. At intermediate elevations it is elliptically polarized. Polarization is important in radio communications because, for instance, if one attempts to use a horizontally polarized antenna to receive a vertically polarized transmission, the signal strength will be substantially reduced (or under very controlled conditions, reduced to nothing). This principle is used in satellite television in order to double the channel capacity over a fixed frequency band. The same frequency channel can be used for two signals broadcast in opposite polarizations. By adjusting the receiving antenna for one or the other polarization, either signal can be selected without interference from the other. Especially due to the presence of the ground, there are some differences in propagation (and also in reflections responsible for TV ghosting) between horizontal and vertical polarizations. AM and FM broadcast radio usually use vertical polarization, while television uses horizontal polarization. At low frequencies especially, horizontal polarization is avoided. That is because the phase of a horizontally polarized wave is reversed upon reflection by the ground. A distant station in the horizontal direction will receive both the direct and reflected wave, which thus tend to cancel each other. This problem is avoided with vertical polarization. Polarization is also important in the transmission of radar pulses and reception of radar reflections by the same or a different antenna. For instance, back scattering of radar pulses by rain drops can be avoided by using circular polarization. Just as specular reflection of circularly polarized light reverses the handedness of the polarization, as discussed above, the same principle applies to scattering by objects much smaller than a wavelength such as rain drops. On the other hand, reflection of that wave by an irregular metal object (such as an airplane) will typically introduce a change in polarization and (partial) reception of the return wave by the same antenna. The effect of free electrons in the ionosphere, in conjunction with the earth's magnetic field, causes Faraday rotation, a sort of circular birefringence. This is the same mechanism which can rotate the axis of linear polarization by electrons in interstellar space as mentioned below. The magnitude of Faraday rotation caused by such a plasma is greatly exaggerated at lower frequencies, so at the higher microwave frequencies used by satellites the effect is minimal. However, medium or short wave transmissions received following refraction by the ionosphere are strongly affected. Since a wave's path through the ionosphere and the earth's magnetic field vector along such a path are rather unpredictable, a wave transmitted with vertical (or horizontal) polarization will generally have a resulting polarization in an arbitrary orientation at the receiver. Polarization and vision Many animals are capable of perceiving some of the components of the polarization of light, e.g., linear horizontally polarized light. This is generally used for navigational purposes, since the linear polarization of sky light is always perpendicular to the direction of the sun. This ability is very common among the insects, including bees, which use this information to orient their communicative dances. 
Polarization sensitivity has also been observed in species of octopus, squid, cuttlefish, and mantis shrimp. In the latter case, one species measures all six orthogonal components of polarization, and is believed to have optimal polarization vision. The rapidly changing, vividly colored skin patterns of cuttlefish, used for communication, also incorporate polarization patterns, and mantis shrimp are known to have polarization selective reflective tissue. Sky polarization was thought to be perceived by pigeons, which was assumed to be one of their aids in homing, but research indicates this is a popular myth. The naked human eye is weakly sensitive to polarization, without the need for intervening filters. Polarized light creates a very faint pattern near the center of the visual field, called Haidinger's brush. This pattern is very difficult to see, but with practice one can learn to detect polarized light with the naked eye. Angular momentum using circular polarization It is well known that electromagnetic radiation carries a certain linear momentum in the direction of propagation. In addition, however, light carries a certain angular momentum if it is circularly polarized (or partially so). In comparison with lower frequencies such as microwaves, the amount of angular momentum in light, even of pure circular polarization, compared to the same wave's linear momentum (or radiation pressure) is very small and difficult to even measure. However, it was utilized in an experiment to achieve speeds of up to 600 million revolutions per minute.
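For a rough sense of the magnitudes involved, the spin angular momentum carried by a circularly polarized beam can be estimated from two standard relations: the linear momentum flux of a beam of power P is P/c, while the spin angular momentum flux of a fully circularly polarized beam is P/ω (equivalently, ħ per photon). The Python sketch below is an added illustration; the 1 W, 532 nm beam is an assumed example, not a figure from the article:

```python
import numpy as np

c = 299_792_458.0          # speed of light, m/s
h_bar = 1.054_571_817e-34  # reduced Planck constant, J*s

wavelength = 532e-9        # assumed: a fully circularly polarized 1 W green beam
power = 1.0                # W
omega = 2 * np.pi * c / wavelength

linear_momentum_flux = power / c        # N  (radiation-pressure force if absorbed)
spin_ang_momentum_flux = power / omega  # N*m (torque on a fully absorbing target)
photons_per_second = power / (h_bar * omega)

print(f"force from radiation pressure : {linear_momentum_flux:.2e} N")
print(f"torque from spin ang. momentum: {spin_ang_momentum_flux:.2e} N*m")
print(f"angular momentum per photon   : {spin_ang_momentum_flux / photons_per_second:.2e} J*s")  # ~hbar
```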
Physical sciences
Optics
null
41600
https://en.wikipedia.org/wiki/Pulse
Pulse
In medicine, the pulse is the rhythmic throbbing of each artery in response to the cardiac cycle (heartbeat). The pulse may be palpated in any place that allows an artery to be compressed near the surface of the body, such as at the neck (carotid artery), wrist (radial artery or ulnar artery), at the groin (femoral artery), behind the knee (popliteal artery), near the ankle joint (posterior tibial artery), and on foot (dorsalis pedis artery). The pulse is most commonly measured at the wrist or neck. A sphygmograph is an instrument for measuring the pulse. Physiology Claudius Galen was perhaps the first physiologist to describe the pulse. The pulse is an expedient tactile method of determination of systolic blood pressure to a trained observer. Diastolic blood pressure is non-palpable and unobservable by tactile methods, occurring between heartbeats. Pressure waves generated by the heart in systole move the arterial walls. Forward movement of blood occurs when the boundaries are pliable and compliant. These properties form enough to create a palpable pressure wave. Pulse velocity, pulse deficits and much more physiologic data are readily and simplistically visualized by the use of one or more arterial catheters connected to a transducer and oscilloscope. This invasive technique has been commonly used in intensive care since the 1970s. The pulse may be further indirectly observed under light absorbances of varying wavelengths with assigned and inexpensively reproduced mathematical ratios. Applied capture of variances of light signal from the blood component hemoglobin under oxygenated vs. deoxygenated conditions allows the technology of pulse oximetry. Characteristics Rate The rate of the pulse can be observed and measured on the outside of an artery by tactile or visual means. It is recorded as arterial beats per minute or BPM. Although the pulse and heart beat are related, they are not the same. For example, there is a delay between the onset of the heart beat and the onset of the pulse, known as the pulse transit time, which varies by site. Similarly measurements of heart rate variability and pulse rate variability differ. In healthy people, the pulse rate is close to the heart rate, as measured by ECG. Measuring the pulse rate is therefore a convenient way to estimate the heart rate. Pulse deficit is a condition in which a person has a difference between their pulse rate and heart rate. It can be observed by simultaneous palpation at the radial artery and auscultation using a stethoscope at the PMI, near the heart apex, for example. Typically, in people with pulse deficit, heart beats do not result in pulsations at the periphery, meaning the pulse rate is lower than the heart rate. Pulse deficit has been found to be significant in the context of premature ventricular contraction and atrial fibrillation. Rhythm A normal pulse is regular in rhythm and force. An irregular pulse may be due to sinus arrhythmia, ectopic beats, atrial fibrillation, paroxysmal atrial tachycardia, atrial flutter, partial heart block etc. Intermittent dropping out of beats at pulse is called "intermittent pulse". Examples of regular intermittent (regularly irregular) pulse include pulsus bigeminus, second-degree atrioventricular block. An example of irregular intermittent (irregularly irregular) pulse is atrial fibrillation. Volume The degree of expansion displayed by artery during diastolic and systolic state is called volume. It is also known as amplitude, expansion or size of pulse. 
Hypokinetic pulse A weak pulse signifies narrow pulse pressure. It may be due to low cardiac output (as seen in shock, congestive cardiac failure), hypovolemia, valvular heart disease (such as aortic outflow tract obstruction, mitral stenosis, aortic arch syndrome) etc. Hyperkinetic pulse A bounding pulse signifies high pulse pressure. It may be due to low peripheral resistance (as seen in fever, anemia, thyrotoxicosis, A-V fistula, Paget's disease, beriberi, liver cirrhosis), increased cardiac output, increased stroke volume (as seen in anxiety, exercise, complete heart block, aortic regurgitation), or decreased distensibility of the arterial system (as seen in atherosclerosis, hypertension and coarctation of aorta). The strength of the pulse can also be reported: 0 = Absent 1 = Barely palpable 2 = Easily palpable 3 = Full 4 = Aneurysmal or bounding pulse Force Also known as compressibility of pulse. It is a rough indication of systolic blood pressure. Tension Determined mainly by mean arterial blood pressure; it corresponds to diastolic blood pressure. In a low tension pulse (pulsus mollis), the vessel is soft or impalpable between beats. In a high tension pulse (pulsus durus), vessels feel rigid even between pulse beats. Form The form or contour of a pulse is a palpatory estimation of the arteriogram. A quickly rising and quickly falling pulse (pulsus celer) is seen in aortic regurgitation. A slowly rising and slowly falling pulse (pulsus tardus) is seen in aortic stenosis. Equality Comparing pulses at different places gives valuable clinical information. A discrepant or unequal pulse between left and right radial artery is observed in anomalous or aberrant course of artery, coarctation of aorta, aortitis, dissecting aneurysm, peripheral embolism etc. An unequal pulse between upper and lower extremities is seen in coarctation of aorta, aortitis, block at bifurcation of aorta, dissection of aorta, iatrogenic trauma and arteriosclerotic obstruction. Condition of arterial wall A normal artery is not palpable after flattening by digital pressure. A thick radial artery which is palpable 7.5–10 cm up the forearm is suggestive of arteriosclerosis. Radio-femoral delay In coarctation of aorta, the femoral pulse may be significantly delayed as compared to the radial pulse (unless there is coexisting aortic regurgitation). The delay can also be observed in supravalvar aortic stenosis. Patterns Several pulse patterns can be of clinical significance. These include: Anacrotic pulse: notch on the upstroke of the carotid pulse. Two distinct waves (slow initial upstroke and delayed peak, which is close to S2). Present in AS. Dicrotic pulse: is characterized by two beats per cardiac cycle, one systolic and the other diastolic. Physiologically, the dicrotic wave is the result of reflected waves from the lower extremities and aorta. Conditions associated with low cardiac output and high systemic vascular resistance can produce a dicrotic pulse. Pulse deficit: difference between the heart rate obtained by direct cardiac auscultation and the peripheral arterial pulse rate obtained by palpation, as occurs in atrial fibrillation (AF). Pulsus alternans: an ominous medical sign that indicates progressive systolic heart failure. To trained fingertips, the examiner notes a pattern of a strong pulse followed by a weak pulse over and over again. This pulse signals a flagging effort of the heart to sustain itself in systole. It also can be detected in HCM with obstruction. Pulsus bigeminus: indicates a pair of hoofbeats within each heartbeat. 
Concurrent auscultation of the heart may reveal a gallop rhythm of the native heartbeat. Pulsus bisferiens: is characterized by two beats per cardiac cycle, both systolic, unlike the dicrotic pulse. It is an unusual physical finding typically seen in patients with aortic valve diseases if the aortic valve does not normally open and close. Trained fingertips will observe two pulses to each heartbeat instead of one. Pulsus tardus et parvus, also pulsus parvus et tardus, slow-rising pulse and anacrotic pulse, is weak (parvus), and late (tardus) relative to its expected characteristics. It is caused by a stiffened aortic valve that makes it progressively harder to open, thus requiring increased generation of blood pressure in the left ventricle. It is seen in aortic valve stenosis. Pulsus paradoxus: a condition in which some heartbeats cannot be detected at the radial artery during the inspiration phase of respiration. It is caused by an exaggerated decrease in blood pressure during this phase, and is diagnostic of a variety of cardiac and respiratory conditions of varying urgency, such as cardiac tamponade. Tachycardia: an elevated resting heart rate. In general an electrocardiogram (ECG) is required to identify the type of tachycardia. Pulsatile This description of the pulse implies the intrinsic physiology of systole and diastole. Scientifically, systole and diastole are forces that expand and contract the pulmonary and systemic circulations. A collapsing pulse is a sign of hyperdynamic circulation, which can be seen in AR or PDA. Common palpable sites Sites can be divided into peripheral pulses and central pulses. Central pulses include the carotid, femoral, and brachial pulses. Upper limb Axillary pulse: located inferiorly of the lateral wall of the axilla Brachial pulse: located on the inside of the upper arm near the elbow, frequently used in place of carotid pulse in infants (brachial artery) Radial pulse: located on the lateral of the wrist (radial artery). It can also be found in the anatomical snuff box. Commonly, the radial pulse is measured with three fingers. The finger closest to the heart is used to occlude the pulse pressure, the middle finger is used get a crude estimate of the blood pressure, and the finger most distal to the heart (usually the ring finger) is used to nullify the effect of the ulnar pulse as the two arteries are connected via the palmar arches (superficial and deep). Ulnar pulse: located on the medial of the wrist (ulnar artery). Lower limb Femoral pulse: located in the inner thigh, at the mid-inguinal point, halfway between the pubic symphysis and anterior superior iliac spine (femoral artery). Popliteal pulse: Above the knee in the popliteal fossa, found by holding the bent knee. The patient bends the knee at approximately 124°, and the health care provider holds it in both hands to find the popliteal artery in the pit behind the knee (popliteal artery). Dorsalis pedis pulse: located on top of the foot, immediately lateral to the extensor of hallucis longus (dorsalis pedis artery). Tibialis posterior pulse: located on the medial side of the ankle, 2 cm inferior and 2 cm posterior to the medial malleolus (posterior tibial artery). It is easily palpable over Pimenta's Point. Head and neck Carotid pulse: located in the neck (carotid artery). The carotid artery should be palpated gently and while the patient is sitting or lying down. 
Stimulating its baroreceptors with low palpitation can provoke severe bradycardia or even stop the heart in some sensitive persons. Also, a person's two carotid arteries should not be palpated at the same time. Doing so may limit the flow of blood to the head, possibly leading to fainting or brain ischemia. It can be felt between the anterior border of the sternocleidomastoid muscle, above the hyoid bone and lateral to the thyroid cartilage. Facial pulse: located on the mandible (lower jawbone) on a line with the corners of the mouth (facial artery). Temporal pulse: located on the temple directly in front of the ear (superficial temporal artery). Although the pulse can be felt in multiple places in the head, people should not normally hear their heartbeats within the head. This is called pulsatile tinnitus, and it can indicate several medical disorders. Torso Apical pulse: located in the 5th left intercostal space, 1.25 cm lateral to the mid-clavicular line. In contrast with other pulse sites, the apical pulse site is unilateral, and measured not under an artery, but below the heart itself (more specifically, the apex of the heart).
Biology and health sciences
Medical procedures
null
41625
https://en.wikipedia.org/wiki/Radiometry
Radiometry
Radiometry is a set of techniques for measuring electromagnetic radiation, including visible light. Radiometric techniques in optics characterize the distribution of the radiation's power in space, as opposed to photometric techniques, which characterize the light's interaction with the human eye. The fundamental difference between radiometry and photometry is that radiometry gives the entire optical radiation spectrum, while photometry is limited to the visible spectrum. Radiometry is distinct from quantum techniques such as photon counting. The use of radiometers to determine the temperature of objects and gases by measuring radiation flux is called pyrometry. Handheld pyrometer devices are often marketed as infrared thermometers. Radiometry is important in astronomy, especially radio astronomy, and plays a significant role in Earth remote sensing. The measurement techniques categorized as radiometry in optics are called photometry in some astronomical applications, contrary to the optics usage of the term. Spectroradiometry is the measurement of absolute radiometric quantities in narrow bands of wavelength. Radiometric quantities Integral and spectral radiometric quantities Integral quantities (like radiant flux) describe the total effect of radiation of all wavelengths or frequencies, while spectral quantities (like spectral power) describe the effect of radiation of a single wavelength λ or frequency ν. To each integral quantity there are corresponding spectral quantities, defined as the quotient of the integrated quantity by the range of frequency or wavelength considered. For example, the radiant flux Φe corresponds to the spectral powers Φe,λ and Φe,ν. Getting an integral quantity's spectral counterpart requires a limit transition. This comes from the idea that the probability of a photon existing at precisely the requested wavelength is zero. Let us show the relation between them using the radiant flux as an example: Integral flux, whose unit is W: Φe. Spectral flux by wavelength, whose unit is W/m: Φe,λ = dΦe/dλ, where dΦe is the radiant flux of the radiation in a small wavelength interval [λ, λ + dλ]. The area under a plot of Φe,λ with wavelength as the horizontal axis equals the total radiant flux. Spectral flux by frequency, whose unit is W/Hz: Φe,ν = dΦe/dν, where dΦe is the radiant flux of the radiation in a small frequency interval [ν, ν + dν]. The area under a plot of Φe,ν with frequency as the horizontal axis equals the total radiant flux. The spectral quantities by wavelength and frequency are related to each other, since the product of the two variables is the speed of light (λ · ν = c): Φe,λ = (c/λ²) Φe,ν, or Φe,ν = (c/ν²) Φe,λ, or λ Φe,λ = ν Φe,ν. The integral quantity can be obtained by integrating the spectral quantity over the whole spectrum: Φe = ∫ Φe,λ dλ = ∫ Φe,ν dν.
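These relations can be checked numerically. In the Python sketch below, an added illustration (the Gaussian spectrum is an invented example, not data from the article), a spectral flux given per unit wavelength is integrated to recover the integral flux, converted to a per-frequency spectral flux with Φe,ν = (λ²/c) Φe,λ, and integrated again to show that the same total power results:

```python
import numpy as np

c = 299_792_458.0  # speed of light, m/s

# Assumed example: a Gaussian spectral flux (W per metre of wavelength)
# centred at 500 nm and carrying 2 W in total.
lam = np.linspace(300e-9, 800e-9, 20001)           # wavelength grid, m
total = 2.0                                        # W
sigma = 20e-9
phi_lambda = (total / (sigma * np.sqrt(2 * np.pi))
              * np.exp(-((lam - 500e-9) ** 2) / (2 * sigma ** 2)))

# Integral flux: area under the spectral-flux curve
print(np.trapz(phi_lambda, lam))                   # ~2.0 W

# Same spectrum expressed per unit frequency: Phi_nu = (lambda^2 / c) * Phi_lambda
nu = c / lam
phi_nu = (lam ** 2 / c) * phi_lambda
# Integrating over frequency must give the same total power
print(abs(np.trapz(phi_nu, nu)))                   # ~2.0 W (nu runs downward, hence abs)
```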
Physical sciences
Electromagnetic radiation
Physics
41638
https://en.wikipedia.org/wiki/Cycloid
Cycloid
In geometry, a cycloid is the curve traced by a point on a circle as it rolls along a straight line without slipping. A cycloid is a specific form of trochoid and is an example of a roulette, a curve generated by a curve rolling on another curve. The cycloid, with the cusps pointing upward, is the curve of fastest descent under uniform gravity (the brachistochrone curve). It is also the form of a curve for which the period of an object in simple harmonic motion (rolling up and down repetitively) along the curve does not depend on the object's starting position (the tautochrone curve). In physics, when a charged particle at rest is put under a uniform electric and magnetic field perpendicular to one another, the particle’s trajectory draws out a cycloid. History The cycloid has been called "The Helen of Geometers" as, like Helen of Troy, it caused frequent quarrels among 17th-century mathematicians, while Sarah Hart sees it named as such "because the properties of this curve are so beautiful". Historians of mathematics have proposed several candidates for the discoverer of the cycloid. Mathematical historian Paul Tannery speculated that such a simple curve must have been known to the ancients, citing similar work by Carpus of Antioch described by Iamblichus. English mathematician John Wallis writing in 1679 attributed the discovery to Nicholas of Cusa, but subsequent scholarship indicates that either Wallis was mistaken or the evidence he used is now lost. Galileo Galilei's name was put forward at the end of the 19th century and at least one author reports credit being given to Marin Mersenne. Beginning with the work of Moritz Cantor and Siegmund Günther, scholars now assign priority to French mathematician Charles de Bovelles based on his description of the cycloid in his Introductio in geometriam, published in 1503. In this work, Bovelles mistakes the arch traced by a rolling wheel as part of a larger circle with a radius 120% larger than the smaller wheel. Galileo originated the term cycloid and was the first to make a serious study of the curve. According to his student Evangelista Torricelli, in 1599 Galileo attempted the quadrature of the cycloid (determining the area under the cycloid) with an unusually empirical approach that involved tracing both the generating circle and the resulting cycloid on sheet metal, cutting them out and weighing them. He discovered the ratio was roughly 3:1, which is the true value, but he incorrectly concluded the ratio was an irrational fraction, which would have made quadrature impossible. Around 1628, Gilles Persone de Roberval likely learned of the quadrature problem from Père Marin Mersenne and effected the quadrature in 1634 by using Cavalieri's Theorem. However, this work was not published until 1693 (in his Traité des Indivisibles). Constructing the tangent of the cycloid dates to August 1638 when Mersenne received unique methods from Roberval, Pierre de Fermat and René Descartes. Mersenne passed these results along to Galileo, who gave them to his students Torricelli and Viviani, who were able to produce a quadrature. This result and others were published by Torricelli in 1644, which is also the first printed work on the cycloid. This led to Roberval charging Torricelli with plagiarism, with the controversy cut short by Torricelli's early death in 1647. In 1658, Blaise Pascal had given up mathematics for theology but, while suffering from a toothache, began considering several problems concerning the cycloid. 
His toothache disappeared, and he took this as a heavenly sign to proceed with his research. Eight days later he had completed his essay and, to publicize the results, proposed a contest. Pascal proposed three questions relating to the center of gravity, area and volume of the cycloid, with the winner or winners to receive prizes of 20 and 40 Spanish doubloons. Pascal, Roberval and Senator Carcavy were the judges, and neither of the two submissions (by John Wallis and Antoine de Lalouvère) was judged to be adequate. While the contest was ongoing, Christopher Wren sent Pascal a proposal for a proof of the rectification of the cycloid; Roberval claimed promptly that he had known of the proof for years. Wallis published Wren's proof (crediting Wren) in Wallis's Tractatus Duo, giving Wren priority for the first published proof. Fifteen years later, Christiaan Huygens had deployed the cycloidal pendulum to improve chronometers and had discovered that a particle would traverse a segment of an inverted cycloidal arch in the same amount of time, regardless of its starting point. In 1686, Gottfried Wilhelm Leibniz used analytic geometry to describe the curve with a single equation. In 1696, Johann Bernoulli posed the brachistochrone problem, the solution of which is a cycloid. Equations The cycloid through the origin, generated by a circle of radius rolling over the -axis on the positive side (), consists of the points , with where is a real parameter corresponding to the angle through which the rolling circle has rotated. For given , the circle's centre lies at . The Cartesian equation is obtained by solving the -equation for and substituting into the -equation:or, eliminating the multiple-valued inverse cosine:When is viewed as a function of , the cycloid is differentiable everywhere except at the cusps on the -axis, with the derivative tending toward or near a cusp (where ). The map from to is differentiable, in fact of class ∞, with derivative 0 at the cusps. The slope of the tangent to the cycloid at the point is given by . A cycloid segment from one cusp to the next is called an arch of the cycloid, for example the points with and . Considering the cycloid as the graph of a function , it satisfies the differential equation: If we define as the height difference from the cycloid's vertex (the point with a horizontal tangent and ), then we have: Involute The involute of the cycloid has exactly the same shape as the cycloid it originates from. This can be visualized as the path traced by the tip of a wire initially lying on a half arch of the cycloid: as it unrolls while remaining tangent to the original cycloid, it describes a new cycloid (see also cycloidal pendulum and arc length). Demonstration This demonstration uses the rolling-wheel definition of cycloid, as well as the instantaneous velocity vector of a moving point, tangent to its trajectory. In the adjacent picture, and are two points belonging to two rolling circles, with the base of the first just above the top of the second. Initially, and coincide at the intersection point of the two circles. When the circles roll horizontally with the same speed, and traverse two cycloid curves. Considering the red line connecting and at a given time, one proves the line is always tangent to the lower arc at and orthogonal to the upper arc at . Let be the point in common between the upper and lower circles at the given time. Then: are colinear: indeed the equal rolling speed gives equal angles , and thus . 
The point lies on the line therefore and analogously . From the equality of and one has that also . It follows . If is the meeting point between the perpendicular from to the line segment and the tangent to the circle at , then the triangle is isosceles, as is easily seen from the construction: and . For the previous noted equality between and then and is isosceles. Drawing from the orthogonal segment to , from the straight line tangent to the upper circle, and calling the meeting point, one sees that is a rhombus using the theorems on angles between parallel lines Now consider the velocity of . It can be seen as the sum of two components, the rolling velocity and the drifting velocity , which are equal in modulus because the circles roll without skidding. is parallel to , while is tangent to the lower circle at and therefore is parallel to . The rhombus constituted from the components and is therefore similar (same angles) to the rhombus because they have parallel sides. Then , the total velocity of , is parallel to because both are diagonals of two rhombuses with parallel sides and has in common with the contact point . Thus the velocity vector lies on the prolongation of . Because is tangent to the cycloid at , it follows that also coincides with the tangent to the lower cycloid at . Analogously, it can be easily demonstrated that is orthogonal to (the other diagonal of the rhombus). This proves that the tip of a wire initially stretched on a half arch of the lower cycloid and fixed to the upper circle at will follow the point along its path without changing its length because the speed of the tip is at each moment orthogonal to the wire (no stretching or compression). The wire will be at the same time tangent at to the lower arc because of the tension and the facts demonstrated above. (If it were not tangent there would be a discontinuity at and consequently unbalanced tension forces.) Area Using the above parameterization , the area under one arch, is given by: This is three times the area of the rolling circle. Arc length The arc length of one arch is given by Another geometric way to calculate the length of the cycloid is to notice that when a wire describing an involute has been completely unwrapped from half an arch, it extends itself along two diameters, a length of . This is thus equal to half the length of arch, and that of a complete arch is . From the cycloid's vertex (the point with a horizontal tangent and ) to any point within the same arch, the arc length squared is , which is proportional to the height difference ; this property is the basis for the cycloid's isochronism. In fact, the arc length squared is equal to the height difference multiplied by the full arch length . Cycloidal pendulum If a simple pendulum is suspended from the cusp of an inverted cycloid, such that the string is constrained to be tangent to one of its arches, and the pendulum's length L is equal to that of half the arc length of the cycloid (i.e., twice the diameter of the generating circle, L = 4r), the bob of the pendulum also traces a cycloid path. Such a pendulum is isochronous, with equal-time swings regardless of amplitude. Introducing a coordinate system centred in the position of the cusp, the equation of motion is given by: where is the angle that the straight part of the string makes with the vertical axis, and is given by where is the "amplitude", is the radian frequency of the pendulum and g the gravitational acceleration. 
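The stated values for the area under an arch (three times the area of the rolling circle, 3πr²) and for the arch length (8r), as well as the proportionality between the squared arc length from the vertex and the height drop, can be checked numerically from the parametrization x = r(t − sin t), y = r(1 − cos t). The following Python sketch is an added illustration; the grid resolution and the sample parameter value are arbitrary choices:

```python
import numpy as np

r = 1.0
t = np.linspace(0.0, 2 * np.pi, 200001)      # one arch of the cycloid
dxdt = r * (1 - np.cos(t))                   # derivative of x = r(t - sin t)
dydt = r * np.sin(t)                         # derivative of y = r(1 - cos t)
y = r * (1 - np.cos(t))

# Area under one arch: integral of y * dx/dt dt, expected 3*pi*r^2
print(np.trapz(y * dxdt, t), 3 * np.pi * r**2)

# Length of one arch: integral of the speed, expected 8*r
speed = np.hypot(dxdt, dydt)
print(np.trapz(speed, t), 8 * r)

# Squared arc length measured from the vertex (t = pi) equals the
# height drop from the vertex multiplied by the full arch length 8*r.
t1 = 4.0                                     # arbitrary parameter value past the vertex
mask = (t >= np.pi) & (t <= t1)
s = np.trapz(speed[mask], t[mask])
drop = 2 * r - r * (1 - np.cos(t1))
print(s**2, 8 * r * drop)
```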
The 17th-century Dutch mathematician Christiaan Huygens discovered and proved these properties of the cycloid while searching for more accurate pendulum clock designs to be used in navigation. Related curves Several curves are related to the cycloid. Trochoid: generalization of a cycloid in which the point tracing the curve may be inside the rolling circle (curtate) or outside (prolate). Hypocycloid: variant of a cycloid in which a circle rolls on the inside of another circle instead of a line. Epicycloid: variant of a cycloid in which a circle rolls on the outside of another circle instead of a line. Hypotrochoid: generalization of a hypocycloid where the generating point may not be on the edge of the rolling circle. Epitrochoid: generalization of an epicycloid where the generating point may not be on the edge of the rolling circle. All these curves are roulettes with a circle rolled along another curve of uniform curvature. The cycloid, epicycloids, and hypocycloids have the property that each is similar to its evolute. If q is the product of that curvature with the circle's radius, signed positive for epi- and negative for hypo-, then the similitude ratio of curve to evolute is 1 + 2q. The classic Spirograph toy traces out hypotrochoid and epitrochoid curves. Other uses The cycloidal arch was used by architect Louis Kahn in his design for the Kimbell Art Museum in Fort Worth, Texas. It was also used by Wallace K. Harrison in the design of the Hopkins Center at Dartmouth College in Hanover, New Hampshire. Early research indicated that some transverse arching curves of the plates of golden age violins are closely modeled by curtate cycloid curves. Later work indicates that curtate cycloids do not serve as general models for these curves, which vary considerably.
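For the related curves listed above, the classic Spirograph pattern is a hypotrochoid. A minimal sketch of its usual parametrization follows; all radii and the pen offset are arbitrary example values.

```python
import math

def hypotrochoid_point(theta, R, r, d):
    """Point traced by a pen at distance d from the centre of a circle of radius r
    rolling inside a fixed circle of radius R."""
    k = (R - r) / r
    x = (R - r) * math.cos(theta) + d * math.cos(k * theta)
    y = (R - r) * math.sin(theta) - d * math.sin(k * theta)
    return x, y

# Sample one typical Spirograph-style curve (closes after three revolutions for these radii).
R, r, d = 5.0, 3.0, 2.0
points = [hypotrochoid_point(i * 0.01, R, r, d) for i in range(int(6 * math.pi / 0.01))]
print(points[:3])   # the first few (x, y) samples of the curve
```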
Mathematics
Two-dimensional space
41641
https://en.wikipedia.org/wiki/Reflection%20coefficient
Reflection coefficient
In physics and electrical engineering the reflection coefficient is a parameter that describes how much of a wave is reflected by an impedance discontinuity in the transmission medium. It is equal to the ratio of the amplitude of the reflected wave to the incident wave, with each expressed as phasors. For example, it is used in optics to calculate the amount of light that is reflected from a surface with a different index of refraction, such as a glass surface, or in an electrical transmission line to calculate how much of the electromagnetic wave is reflected by an impedance discontinuity. The reflection coefficient is closely related to the transmission coefficient. The reflectance of a system is also sometimes called a reflection coefficient. Different specialties have different applications for the term. Transmission lines In telecommunications and transmission line theory, the reflection coefficient is the ratio of the complex amplitude of the reflected wave to that of the incident wave. The voltage and current at any point along a transmission line can always be resolved into forward and reflected traveling waves given a specified reference impedance Z0. The reference impedance used is typically the characteristic impedance of a transmission line that's involved, but one can speak of reflection coefficient without any actual transmission line being present. In terms of the forward and reflected waves determined by the voltage and current, the reflection coefficient is defined as the complex ratio of the voltage of the reflected wave () to that of the incident wave (). This is typically represented with a (capital gamma) and can be written as: It can as well be defined using the currents associated with the reflected and forward waves, but introducing a minus sign to account for the opposite orientations of the two currents: The reflection coefficient may also be established using other field or circuit pairs of quantities whose product defines power resolvable into a forward and reverse wave. For instance, with electromagnetic plane waves, one uses the ratio of the electric fields of the reflected to that of the forward wave (or magnetic fields, again with a minus sign); the ratio of each wave's electric field E to its magnetic field H is again an impedance Z0 (equal to the impedance of free space in a vacuum). Similarly in acoustics one uses the acoustic pressure and velocity respectively. In the accompanying figure, a signal source with internal impedance possibly followed by a transmission line of characteristic impedance is represented by its Thévenin equivalent, driving the load . For a real (resistive) source impedance , if we define using the reference impedance then the source's maximum power is delivered to a load , in which case implying no reflected power. More generally, the squared-magnitude of the reflection coefficient denotes the proportion of that power that is reflected back to the source, with the power actually delivered toward the load being . Anywhere along an intervening (lossless) transmission line of characteristic impedance , the magnitude of the reflection coefficient will remain the same (the powers of the forward and reflected waves stay the same) but with a different phase. In the case of a short circuited load (), one finds at the load. This implies the reflected wave having a 180° phase shift (phase reversal) with the voltages of the two waves being opposite at that point and adding to zero (as a short circuit demands). 
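A minimal sketch of the definition above, using Python's complex arithmetic (the impedance values are arbitrary examples): the load reflection coefficient is (ZL − Z0)/(ZL + Z0), and its squared magnitude is the fraction of incident power reflected back toward the source.

```python
def reflection_coefficient(z_load, z0):
    """Voltage reflection coefficient at the load for reference impedance z0."""
    return (z_load - z0) / (z_load + z0)

z0 = 50.0                                    # reference (characteristic) impedance, ohms
for z_load in (50.0, 0.0, 75 + 25j):          # matched load, short circuit, complex load
    gamma = reflection_coefficient(z_load, z0)
    print(z_load, gamma, abs(gamma) ** 2)     # gamma and the reflected power fraction
# A matched load gives gamma = 0 (no reflection); a short circuit gives gamma = -1,
# i.e. total reflection with a 180-degree phase reversal.
```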
Relation to load impedance The reflection coefficient is determined by the load impedance at the end of the transmission line, as well as the characteristic impedance of the line. A load impedance of terminating a line with a characteristic impedance of will have a reflection coefficient of This is the coefficient at the load. The reflection coefficient can also be measured at other points on the line. The magnitude of the reflection coefficient in a lossless transmission line is constant along the line (as are the powers in the forward and reflected waves). However its phase will be shifted by an amount dependent on the electrical distance from the load. If the coefficient is measured at a point meters from the load, so the electrical distance from the load is radians, the coefficient at that point will be Note that the phase of the reflection coefficient is changed by twice the phase length of the attached transmission line. That is to take into account not only the phase delay of the reflected wave, but the phase shift that had first been applied to the forward wave, with the reflection coefficient being the quotient of these. The reflection coefficient so measured, , corresponds to an impedance which is generally dissimilar to present at the far side of the transmission line. The complex reflection coefficient (in the region , corresponding to passive loads) may be displayed graphically using a Smith chart. The Smith chart is a polar plot of , therefore the magnitude of is given directly by the distance of a point to the center (with the edge of the Smith chart corresponding to ). Its evolution along a transmission line is likewise described by a rotation of around the chart's center. Using the scales on a Smith chart, the resulting impedance (normalized to ) can directly be read. Before the advent of modern electronic computers, the Smith chart was of particular use as a sort of analog computer for this purpose. Standing wave ratio The standing wave ratio (SWR) is determined solely by the magnitude of the reflection coefficient: Along a lossless transmission line of characteristic impedance Z0, the SWR signifies the ratio of the voltage (or current) maxima to minima (or what it would be if the transmission line were long enough to produce them). The above calculation assumes that has been calculated using Z0 as the reference impedance. Since it uses only the magnitude of , the SWR intentionally ignores the specific value of the load impedance ZL responsible for it, but only the magnitude of the resulting impedance mismatch. That SWR remains the same wherever measured along a transmission line (looking towards the load) since the addition of a transmission line length to a load only changes the phase, not magnitude of . While having a one-to-one correspondence with reflection coefficient, SWR is the most commonly used figure of merit in describing the mismatch affecting a radio antenna or antenna system. It is most often measured at the transmitter side of a transmission line, but having, as explained, the same value as would be measured at the antenna (load) itself. Seismology Reflection coefficient is used in feeder testing for reliability of medium. Optics and microwaves In optics and electromagnetics in general, reflection coefficient can refer to either the amplitude reflection coefficient described here, or the reflectance, depending on context. Typically, the reflectance is represented by a capital R, while the amplitude reflection coefficient is represented by a lower-case r. 
These related concepts are covered by Fresnel equations in classical optics. Acoustics Acousticians use reflection coefficients to understand the effect of different materials on their acoustic environments.
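Two further sketches of the relations described above, with arbitrary example values: on a lossless line the reflection coefficient keeps its magnitude while its phase is shifted by twice the electrical length, the SWR follows from that magnitude alone, and for the optics case the normal-incidence Fresnel amplitude coefficient has the analogous form (n1 − n2)/(n1 + n2).

```python
import cmath, math

def gamma_load(z_load, z0):
    """Reflection coefficient at the load."""
    return (z_load - z0) / (z_load + z0)

def gamma_at_distance(g_load, beta, length):
    """Coefficient seen `length` metres in front of the load on a lossless line:
    same magnitude, phase shifted by twice the electrical length."""
    return g_load * cmath.exp(-2j * beta * length)

def swr(gamma):
    """Standing wave ratio from |gamma| alone."""
    m = abs(gamma)
    return (1 + m) / (1 - m) if m < 1 else math.inf

z0, z_load = 50.0, 100.0
beta = 2 * math.pi / 2.0                       # phase constant for a 2 m wavelength, rad/m
g0 = gamma_load(z_load, z0)
g1 = gamma_at_distance(g0, beta, 0.25)
print(abs(g0), abs(g1), swr(g0))               # |gamma| is unchanged along the line; SWR = 2

# Normal-incidence Fresnel amplitude coefficient, e.g. air (n = 1.0) to glass (n = 1.5).
n1, n2 = 1.0, 1.5
r = (n1 - n2) / (n1 + n2)
print(r, r ** 2)                               # r = -0.2, so about 4% of the power is reflected
```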
Physical sciences
Optics
Physics
41660
https://en.wikipedia.org/wiki/Resonance
Resonance
Resonance is a phenomenon that occurs when an object or system is subjected to an external force or vibration that matches its natural frequency. When this happens, the object or system absorbs energy from the external force and starts vibrating with a larger amplitude. Resonance can occur in various systems, such as mechanical, electrical, or acoustic systems, and it is often desirable in certain applications, such as musical instruments or radio receivers. However, resonance can also be detrimental, leading to excessive vibrations or even structural failure in some cases. All systems, including molecular systems and particles, tend to vibrate at a natural frequency depending upon their structure; this frequency is known as a resonant frequency or resonance frequency. When an oscillating force, an external vibration, is applied at a resonant frequency of a dynamic system, object, or particle, the outside vibration will cause the system to oscillate at a higher amplitude (with more force) than when the same force is applied at other, non-resonant frequencies. The resonant frequencies of a system can be identified when the response to an external vibration creates an amplitude that is a relative maximum within the system. Small periodic forces that are near a resonant frequency of the system have the ability to produce large amplitude oscillations in the system due to the storage of vibrational energy. Resonance phenomena occur with all types of vibrations or waves: there is mechanical resonance, orbital resonance, acoustic resonance, electromagnetic resonance, nuclear magnetic resonance (NMR), electron spin resonance (ESR) and resonance of quantum wave functions. Resonant systems can be used to generate vibrations of a specific frequency (e.g., musical instruments), or pick out specific frequencies from a complex vibration containing many frequencies (e.g., filters). The term resonance (from Latin resonantia, 'echo', from resonare, 'resound') originated from the field of acoustics, particularly the sympathetic resonance observed in musical instruments, e.g., when one string starts to vibrate and produce sound after a different one is struck. Overview Resonance occurs when a system is able to store and easily transfer energy between two or more different storage modes (such as kinetic energy and potential energy in the case of a simple pendulum). However, there are some losses from cycle to cycle, called damping. When damping is small, the resonant frequency is approximately equal to the natural frequency of the system, which is a frequency of unforced vibrations. Some systems have multiple and distinct resonant frequencies. Examples A familiar example is a playground swing, which acts as a pendulum. Pushing a person in a swing in time with the natural interval of the swing (its resonant frequency) makes the swing go higher and higher (maximum amplitude), while attempts to push the swing at a faster or slower tempo produce smaller arcs. This is because the energy the swing absorbs is maximized when the pushes match the swing's natural oscillations. Resonance occurs widely in nature, and is exploited in many devices. It is the mechanism by which virtually all sinusoidal waves and vibrations are generated. For example, when hard objects like metal, glass, or wood are struck, there are brief resonant vibrations in the object. Light and other short wavelength electromagnetic radiation is produced by resonance on an atomic scale, such as electrons in atoms. 
Other examples of resonance include: Timekeeping mechanisms of modern clocks and watches, e.g., the balance wheel in a mechanical watch and the quartz crystal in a quartz watch Tidal resonance of the Bay of Fundy Acoustic resonances of musical instruments and the human vocal tract Shattering of a crystal wineglass when exposed to a musical tone of the right pitch (its resonant frequency) Friction idiophones, such as making a glass object (glass, bottle, vase) vibrate by rubbing around its rim with a fingertip Electrical resonance of tuned circuits in radios and TVs that allow radio frequencies to be selectively received Creation of coherent light by optical resonance in a laser cavity Orbital resonance as exemplified by some moons of the Solar System's giant planets and resonant groups such as the plutinos Material resonances in atomic scale are the basis of several spectroscopic techniques that are used in condensed matter physics Electron spin resonance Mössbauer effect Nuclear magnetic resonance Linear systems Resonance manifests itself in many linear and nonlinear systems as oscillations around an equilibrium point. When the system is driven by a sinusoidal external input, a measured output of the system may oscillate in response. The ratio of the amplitude of the output's steady-state oscillations to the input's oscillations is called the gain, and the gain can be a function of the frequency of the sinusoidal external input. Peaks in the gain at certain frequencies correspond to resonances, where the amplitude of the measured output's oscillations are disproportionately large. Since many linear and nonlinear systems that oscillate are modeled as harmonic oscillators near their equilibria, a derivation of the resonant frequency for a driven, damped harmonic oscillator is shown. An RLC circuit is used to illustrate connections between resonance and a system's transfer function, frequency response, poles, and zeroes. Building off the RLC circuit example, these connections for higher-order linear systems with multiple inputs and outputs are generalized. The driven, damped harmonic oscillator Consider a damped mass on a spring driven by a sinusoidal, externally applied force. Newton's second law takes the form where m is the mass, x is the displacement of the mass from the equilibrium point, F0 is the driving amplitude, ω is the driving angular frequency, k is the spring constant, and c is the viscous damping coefficient. This can be rewritten in the form where is called the undamped angular frequency of the oscillator or the natural frequency, is called the damping ratio. Many sources also refer to ω0 as the resonant frequency. However, as shown below, when analyzing oscillations of the displacement x(t), the resonant frequency is close to but not the same as ω0. In general the resonant frequency is close to but not necessarily the same as the natural frequency. The RLC circuit example in the next section gives examples of different resonant frequencies for the same system. The general solution of Equation () is the sum of a transient solution that depends on initial conditions and a steady state solution that is independent of initial conditions and depends only on the driving amplitude F0, driving frequency ω, undamped angular frequency ω0, and the damping ratio ζ. The transient solution decays in a relatively short amount of time, so to study resonance it is sufficient to consider the steady state solution. 
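A minimal numerical sketch of the behaviour just described, with arbitrary parameter values (m = k = 1, light damping): integrating the driven, damped equation of motion and discarding the transient shows that the surviving steady-state oscillation is largest when the driving frequency is near the natural frequency.

```python
import math

def steady_state_amplitude(omega, m=1.0, k=1.0, c=0.1, f0=1.0):
    """Integrate m*x'' + c*x' + k*x = f0*cos(omega*t) with a fourth-order Runge-Kutta
    step and return the peak displacement after the transient has decayed."""
    def accel(t, x, v):
        return (f0 * math.cos(omega * t) - c * v - k * x) / m

    x, v, t, dt, peak = 0.0, 0.0, 0.0, 0.01, 0.0
    for step in range(60_000):
        k1x, k1v = v, accel(t, x, v)
        k2x, k2v = v + 0.5*dt*k1v, accel(t + 0.5*dt, x + 0.5*dt*k1x, v + 0.5*dt*k1v)
        k3x, k3v = v + 0.5*dt*k2v, accel(t + 0.5*dt, x + 0.5*dt*k2x, v + 0.5*dt*k2v)
        k4x, k4v = v + dt*k3v, accel(t + dt, x + dt*k3x, v + dt*k3v)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
        t += dt
        if step > 40_000:                      # keep only the steady-state portion
            peak = max(peak, abs(x))
    return peak

# The natural frequency for m = k = 1 is omega0 = 1; the response peaks near it.
for omega in (0.5, 0.9, 1.0, 1.1, 1.5):
    print(omega, round(steady_state_amplitude(omega), 3))
```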
It is possible to write the steady-state solution for x(t) as a function proportional to the driving force with an induced phase change φ, where The phase value is usually taken to be between −180° and 0 so it represents a phase lag for both positive and negative values of the arctan argument. Resonance occurs when, at certain driving frequencies, the steady-state amplitude of x(t) is large compared to its amplitude at other driving frequencies. For the mass on a spring, resonance corresponds physically to the mass's oscillations having large displacements from the spring's equilibrium position at certain driving frequencies. Looking at the amplitude of x(t) as a function of the driving frequency ω, the amplitude is maximal at the driving frequency ωr is the resonant frequency for this system. Again, the resonant frequency does not equal the undamped angular frequency ω0 of the oscillator. They are proportional, and if the damping ratio goes to zero they are the same, but for non-zero damping they are not the same frequency. As shown in the figure, resonance may also occur at other frequencies near the resonant frequency, including ω0, but the maximum response is at the resonant frequency. Also, ωr is only real and non-zero if , so this system can only resonate when the harmonic oscillator is significantly underdamped. For systems with a very small damping ratio and a driving frequency near the resonant frequency, the steady state oscillations can become very large. The pendulum For other driven, damped harmonic oscillators whose equations of motion do not look exactly like the mass on a spring example, the resonant frequency remains but the definitions of ω0 and ζ change based on the physics of the system. For a pendulum of length ℓ and small displacement angle θ, Equation () becomes and therefore RLC series circuits Consider a circuit consisting of a resistor with resistance R, an inductor with inductance L, and a capacitor with capacitance C connected in series with current i(t) and driven by a voltage source with voltage vin(t). The voltage drop around the circuit is Rather than analyzing a candidate solution to this equation like in the mass on a spring example above, this section will analyze the frequency response of this circuit. Taking the Laplace transform of Equation (), where I(s) and Vin(s) are the Laplace transform of the current and input voltage, respectively, and s is a complex frequency parameter in the Laplace domain. Rearranging terms, Voltage across the capacitor An RLC circuit in series presents several options for where to measure an output voltage. Suppose the output voltage of interest is the voltage drop across the capacitor. As shown above, in the Laplace domain this voltage is or Define for this circuit a natural frequency and a damping ratio, The ratio of the output voltage to the input voltage becomes H(s) is the transfer function between the input voltage and the output voltage. This transfer function has two poles–roots of the polynomial in the transfer function's denominator–at and no zeros–roots of the polynomial in the transfer function's numerator. Moreover, for , the magnitude of these poles is the natural frequency ω0 and that for , our condition for resonance in the harmonic oscillator example, the poles are closer to the imaginary axis than to the real axis. Evaluating H(s) along the imaginary axis , the transfer function describes the frequency response of this circuit. 
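A short sketch of that evaluation, with illustrative component values: writing the capacitor-voltage transfer function in terms of ω0 and ζ and scanning |H(iω)| shows the gain peaking slightly below the natural frequency, at ω0√(1 − 2ζ²).

```python
import math

R, L, C = 10.0, 1e-3, 1e-6                  # example component values (ohms, henries, farads)
omega0 = 1.0 / math.sqrt(L * C)             # natural frequency, rad/s
zeta = (R / 2.0) * math.sqrt(C / L)         # damping ratio

def capacitor_gain(omega):
    """|H(i*omega)| for the capacitor voltage of the series RLC circuit."""
    s = 1j * omega
    return abs(omega0**2 / (s**2 + 2 * zeta * omega0 * s + omega0**2))

omegas = [omega0 * (0.5 + 0.001 * i) for i in range(1000)]
peak = max(omegas, key=capacitor_gain)
print(round(zeta, 4))
print(round(peak, 1), round(omega0 * math.sqrt(1 - 2 * zeta**2), 1))  # numeric vs analytic peak
```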
Equivalently, the frequency response can be analyzed by taking the Fourier transform of Equation () instead of the Laplace transform. The transfer function, which is also complex, can be written as a gain and phase, A sinusoidal input voltage at frequency ω results in an output voltage at the same frequency that has been scaled by G(ω) and has a phase shift Φ(ω). The gain and phase can be plotted versus frequency on a Bode plot. For the RLC circuit's capacitor voltage, the gain of the transfer function H(iω) is Note the similarity between the gain here and the amplitude in Equation (). Once again, the gain is maximized at the resonant frequency Here, the resonance corresponds physically to having a relatively large amplitude for the steady state oscillations of the voltage across the capacitor compared to its amplitude at other driving frequencies. Voltage across the inductor The resonant frequency need not always take the form given in the examples above. For the RLC circuit, suppose instead that the output voltage of interest is the voltage across the inductor. As shown above, in the Laplace domain the voltage across the inductor is using the same definitions for ω0 and ζ as in the previous example. The transfer function between Vin(s) and this new Vout(s) across the inductor is This transfer function has the same poles as the transfer function in the previous example, but it also has two zeroes in the numerator at . Evaluating H(s) along the imaginary axis, its gain becomes Compared to the gain in Equation () using the capacitor voltage as the output, this gain has a factor of ω2 in the numerator and will therefore have a different resonant frequency that maximizes the gain. That frequency is So for the same RLC circuit but with the voltage across the inductor as the output, the resonant frequency is now larger than the natural frequency, though it still tends towards the natural frequency as the damping ratio goes to zero. That the same circuit can have different resonant frequencies for different choices of output is not contradictory. As shown in Equation (), the voltage drop across the circuit is divided among the three circuit elements, and each element has different dynamics. The capacitor's voltage grows slowly by integrating the current over time and is therefore more sensitive to lower frequencies, whereas the inductor's voltage grows when the current changes rapidly and is therefore more sensitive to higher frequencies. While the circuit as a whole has a natural frequency where it tends to oscillate, the different dynamics of each circuit element make each element resonate at a slightly different frequency. Voltage across the resistor Suppose that the output voltage of interest is the voltage across the resistor. In the Laplace domain the voltage across the resistor is and using the same natural frequency and damping ratio as in the capacitor example the transfer function is This transfer function also has the same poles as the previous RLC circuit examples, but it only has one zero in the numerator at s = 0. For this transfer function, its gain is The resonant frequency that maximizes this gain is and the gain is one at this frequency, so the voltage across the resistor resonates at the circuit's natural frequency and at this frequency the amplitude of the voltage across the resistor equals the input voltage's amplitude. Antiresonance Some systems exhibit antiresonance that can be analyzed in the same way as resonance. 
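Before elaborating, a sketch that ties the RLC examples together (ω0 and ζ are arbitrary example values): the three output choices share the same poles but peak at different frequencies, and the combined inductor-plus-capacitor output discussed next drops to zero at ω0, the antiresonance.

```python
import math

omega0, zeta = 1.0, 0.2                       # example natural frequency and damping ratio

def gains(omega):
    """Gains |Vout/Vin| of the series RLC circuit for different output choices."""
    s = 1j * omega
    d = s**2 + 2 * zeta * omega0 * s + omega0**2        # common denominator (same poles)
    return {
        "capacitor": abs(omega0**2 / d),
        "inductor": abs(s**2 / d),
        "resistor": abs(2 * zeta * omega0 * s / d),
        "inductor+capacitor": abs((s**2 + omega0**2) / d),
    }

omegas = [0.001 * i for i in range(1, 3000)]
for name in ("capacitor", "inductor", "resistor"):
    peak = max(omegas, key=lambda w: gains(w)[name])
    print(name, "peaks near", round(peak, 3))
# Expected: omega0*sqrt(1-2*zeta^2) ~ 0.959, omega0/sqrt(1-2*zeta^2) ~ 1.043, and omega0 = 1.0.

print("inductor+capacitor gain at omega0:", gains(omega0)["inductor+capacitor"])  # 0.0 (antiresonance)
```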
For antiresonance, the amplitude of the response of the system at certain frequencies is disproportionately small rather than being disproportionately large. In the RLC circuit example, this phenomenon can be observed by analyzing both the inductor and the capacitor combined. Suppose that the output voltage of interest in the RLC circuit is the voltage across the inductor and the capacitor combined in series. Equation () showed that the sum of the voltages across the three circuit elements sums to the input voltage, so measuring the output voltage as the sum of the inductor and capacitor voltages combined is the same as vin minus the voltage drop across the resistor. The previous example showed that at the natural frequency of the system, the amplitude of the voltage drop across the resistor equals the amplitude of vin, and therefore the voltage across the inductor and capacitor combined has zero amplitude. We can show this with the transfer function. The sum of the inductor and capacitor voltages is Using the same natural frequency and damping ratios as the previous examples, the transfer function is This transfer has the same poles as the previous examples but has zeroes at Evaluating the transfer function along the imaginary axis, its gain is Rather than look for resonance, i.e., peaks of the gain, notice that the gain goes to zero at ω = ω0, which complements our analysis of the resistor's voltage. This is called antiresonance, which has the opposite effect of resonance. Rather than result in outputs that are disproportionately large at this frequency, this circuit with this choice of output has no response at all at this frequency. The frequency that is filtered out corresponds exactly to the zeroes of the transfer function, which were shown in Equation () and were on the imaginary axis. Relationships between resonance and frequency response in the RLC series circuit example These RLC circuit examples illustrate how resonance is related to the frequency response of the system. Specifically, these examples illustrate: How resonant frequencies can be found by looking for peaks in the gain of the transfer function between the input and output of the system, for example in a Bode magnitude plot How the resonant frequency for a single system can be different for different choices of system output The connection between the system's natural frequency, the system's damping ratio, and the system's resonant frequency The connection between the system's natural frequency and the magnitude of the transfer function's poles, pointed out in Equation (), and therefore a connection between the poles and the resonant frequency A connection between the transfer function's zeroes and the shape of the gain as a function of frequency, and therefore a connection between the zeroes and the resonant frequency that maximizes gain A connection between the transfer function's zeroes and antiresonance The next section extends these concepts to resonance in a general linear system. Generalizing resonance and antiresonance for linear systems Next consider an arbitrary linear system with multiple inputs and outputs. For example, in state-space representation a third order linear time-invariant system with three inputs and two outputs might be written as where ui(t) are the inputs, xi(t) are the state variables, yi(t) are the outputs, and A, B, C, and D are matrices describing the dynamics between the variables. 
This system has a transfer function matrix whose elements are the transfer functions between the various inputs and outputs. For example, Each Hij(s) is a scalar transfer function linking one of the inputs to one of the outputs. The RLC circuit examples above had one input voltage and showed four possible output voltages–across the capacitor, across the inductor, across the resistor, and across the capacitor and inductor combined in series–each with its own transfer function. If the RLC circuit were set up to measure all four of these output voltages, that system would have a 4×1 transfer function matrix linking the single input to each of the four outputs. Evaluated along the imaginary axis, each Hij(iω) can be written as a gain and phase shift, Peaks in the gain at certain frequencies correspond to resonances between that transfer function's input and output, assuming the system is stable. Each transfer function Hij(s) can also be written as a fraction whose numerator and denominator are polynomials of s. The complex roots of the numerator are called zeroes, and the complex roots of the denominator are called poles. For a stable system, the positions of these poles and zeroes on the complex plane give some indication of whether the system can resonate or antiresonate and at which frequencies. In particular, any stable or marginally stable, complex conjugate pair of poles with imaginary components can be written in terms of a natural frequency and a damping ratio as as in Equation (). The natural frequency ω0 of that pole is the magnitude of the position of the pole on the complex plane and the damping ratio of that pole determines how quickly that oscillation decays. In general, Complex conjugate pairs of poles near the imaginary axis correspond to a peak or resonance in the frequency response in the vicinity of the pole's natural frequency. If the pair of poles is on the imaginary axis, the gain is infinite at that frequency. Complex conjugate pairs of zeroes near the imaginary axis correspond to a notch or antiresonance in the frequency response in the vicinity of the zero's frequency, i.e., the frequency equal to the magnitude of the zero. If the pair of zeroes is on the imaginary axis, the gain is zero at that frequency. In the RLC circuit example, the first generalization relating poles to resonance is observed in Equation (). The second generalization relating zeroes to antiresonance is observed in Equation (). In the examples of the harmonic oscillator, the RLC circuit capacitor voltage, and the RLC circuit inductor voltage, "poles near the imaginary axis" corresponds to the significantly underdamped condition ζ < 1/. Standing waves A physical system can have as many natural frequencies as it has degrees of freedom and can resonate near each of those natural frequencies. A mass on a spring, which has one degree of freedom, has one natural frequency. A double pendulum, which has two degrees of freedom, can have two natural frequencies. As the number of coupled harmonic oscillators increases, the time it takes to transfer energy from one to the next becomes significant. Systems with very large numbers of degrees of freedom can be thought of as continuous rather than as having discrete oscillators. Energy transfers from one oscillator to the next in the form of waves. For example, the string of a guitar or the surface of water in a bowl can be modeled as a continuum of small coupled oscillators and waves can travel along them. 
In many cases these systems have the potential to resonate at certain frequencies, forming standing waves with large-amplitude oscillations at fixed positions. Resonance in the form of standing waves underlies many familiar phenomena, such as the sound produced by musical instruments, electromagnetic cavities used in lasers and microwave ovens, and energy levels of atoms. Standing waves on a string When a string of fixed length is driven at a particular frequency, a wave propagates along the string at the same frequency. The waves reflect off the ends of the string, and eventually a steady state is reached with waves traveling in both directions. The waveform is the superposition of the waves. At certain frequencies, the steady state waveform does not appear to travel along the string. At fixed positions called nodes, the string is never displaced. Between the nodes the string oscillates and exactly halfway between the nodes–at positions called anti-nodes–the oscillations have their largest amplitude. For a string of length with fixed ends, the displacement of the string perpendicular to the -axis at time is where is the amplitude of the left- and right-traveling waves interfering to form the standing wave, is the wave number, is the frequency. The frequencies that resonate and form standing waves relate to the length of the string as where is the speed of the wave and the integer denotes different modes or harmonics. The standing wave with oscillates at the fundamental frequency and has a wavelength that is twice the length of the string. The possible modes of oscillation form a harmonic series. Resonance in complex networks A generalization to complex networks of coupled harmonic oscillators shows that such systems have a finite number of natural resonant frequencies, related to the topological structure of the network itself. In particular, such frequencies result related to the eigenvalues of the network's Laplacian matrix. Let be the adjacency matrix describing the topological structure of the network and the corresponding Laplacian matrix, where is the diagonal matrix of the degrees of the network's nodes. Then, for a network of classical and identical harmonic oscillators, when a sinusoidal driving force is applied to a specific node, the global resonant frequencies of the network are given by where are the eigenvalues of the Laplacian . Types Mechanical Mechanical resonance is the tendency of a mechanical system to absorb more energy when the frequency of its oscillations matches the system's natural frequency of vibration than it does at other frequencies. It may cause violent swaying motions and even catastrophic failure in improperly constructed structures including bridges, buildings, trains, and aircraft. When designing objects, engineers must ensure the mechanical resonance frequencies of the component parts do not match driving vibrational frequencies of motors or other oscillating parts, a phenomenon known as resonance disaster. Avoiding resonance disasters is a major concern in every building, tower, and bridge construction project. As a countermeasure, shock mounts can be installed to absorb resonant frequencies and thus dissipate the absorbed energy. The Taipei 101 building relies on a —a tuned mass damper—to cancel resonance. Furthermore, the structure is designed to resonate at a frequency that does not typically occur. Buildings in seismic zones are often constructed to take into account the oscillating frequencies of expected ground motion. 
In addition, engineers designing objects having engines must ensure that the mechanical resonant frequencies of the component parts do not match driving vibrational frequencies of the motors or other strongly oscillating parts. Clocks keep time by mechanical resonance in a balance wheel, pendulum, or quartz crystal. The cadence of runners has been hypothesized to be energetically favorable due to resonance between the elastic energy stored in the lower limb and the mass of the runner. International Space Station The rocket engines for the International Space Station (ISS) are controlled by an autopilot. Ordinarily, uploaded parameters for controlling the engine control system for the Zvezda module make the rocket engines boost the International Space Station to a higher orbit. The rocket engines are hinge-mounted, and ordinarily the crew does not notice the operation. On January 14, 2009, however, the uploaded parameters made the autopilot swing the rocket engines in larger and larger oscillations, at a frequency of 0.5 Hz. These oscillations were captured on video, and lasted for 142 seconds. Acoustic Acoustic resonance is a branch of mechanical resonance that is concerned with the mechanical vibrations across the frequency range of human hearing, in other words sound. For humans, hearing is normally limited to frequencies between about 20 Hz and 20,000 Hz (20 kHz), Many objects and materials act as resonators with resonant frequencies within this range, and when struck vibrate mechanically, pushing on the surrounding air to create sound waves. This is the source of many percussive sounds we hear. Acoustic resonance is an important consideration for instrument builders, as most acoustic instruments use resonators, such as the strings and body of a violin, the length of tube in a flute, and the shape of, and tension on, a drum membrane. Like mechanical resonance, acoustic resonance can result in catastrophic failure of the object at resonance. The classic example of this is breaking a wine glass with sound at the precise resonant frequency of the glass, although this is difficult in practice. Electrical Electrical resonance occurs in an electric circuit at a particular resonant frequency when the impedance of the circuit is at a minimum in a series circuit or at maximum in a parallel circuit (usually when the transfer function peaks in absolute value). Resonance in circuits are used for both transmitting and receiving wireless communications such as television, cell phones and radio. Optical An optical cavity, also called an optical resonator, is an arrangement of mirrors that forms a standing wave cavity resonator for light waves. Optical cavities are a major component of lasers, surrounding the gain medium and providing feedback of the laser light. They are also used in optical parametric oscillators and some interferometers. Light confined in the cavity reflects multiple times producing standing waves for certain resonant frequencies. The standing wave patterns produced are called "modes". Longitudinal modes differ only in frequency while transverse modes differ for different frequencies and have different intensity patterns across the cross-section of the beam. Ring resonators and whispering galleries are examples of optical resonators that do not form standing waves. Different resonator types are distinguished by the focal lengths of the two mirrors and the distance between them; flat mirrors are not often used because of the difficulty of aligning them precisely. 
The geometry (resonator type) must be chosen so the beam remains stable, i.e., the beam size does not continue to grow with each reflection. Resonator types are also designed to meet other criteria such as minimum beam waist or having no focal point (and therefore intense light at that point) inside the cavity. Optical cavities are designed to have a very large Q factor. A beam reflects a large number of times with little attenuation—therefore the frequency line width of the beam is small compared to the frequency of the laser. Additional optical resonances are guided-mode resonances and surface plasmon resonance, which result in anomalous reflection and high evanescent fields at resonance. In this case, the resonant modes are guided modes of a waveguide or surface plasmon modes of a dielectric-metallic interface. These modes are usually excited by a subwavelength grating. Orbital In celestial mechanics, an orbital resonance occurs when two orbiting bodies exert a regular, periodic gravitational influence on each other, usually due to their orbital periods being related by a ratio of two small integers. Orbital resonances greatly enhance the mutual gravitational influence of the bodies. In most cases, this results in an unstable interaction, in which the bodies exchange momentum and shift orbits until the resonance no longer exists. Under some circumstances, a resonant system can be stable and self-correcting, so that the bodies remain in resonance. Examples are the 1:2:4 resonance of Jupiter's moons Ganymede, Europa, and Io, and the 2:3 resonance between Pluto and Neptune. Unstable resonances with Saturn's inner moons give rise to gaps in the rings of Saturn. The special case of 1:1 resonance (between bodies with similar orbital radii) causes large Solar System bodies to clear the neighborhood around their orbits by ejecting nearly everything else around them; this effect is used in the current definition of a planet. Atomic, particle, and molecular Nuclear magnetic resonance (NMR) is the name given to a physical resonance phenomenon involving the observation of specific quantum mechanical magnetic properties of an atomic nucleus in the presence of an applied, external magnetic field. Many scientific techniques exploit NMR phenomena to study molecular physics, crystals, and non-crystalline materials through NMR spectroscopy. NMR is also routinely used in advanced medical imaging techniques, such as in magnetic resonance imaging (MRI). All nuclei containing odd numbers of nucleons have an intrinsic magnetic moment and angular momentum. A key feature of NMR is that the resonant frequency of a particular substance is directly proportional to the strength of the applied magnetic field. It is this feature that is exploited in imaging techniques; if a sample is placed in a non-uniform magnetic field then the resonant frequencies of the sample's nuclei depend on where in the field they are located. Therefore, the particle can be located quite precisely by its resonant frequency. Electron paramagnetic resonance, otherwise known as electron spin resonance (ESR), is a spectroscopic technique similar to NMR, but uses unpaired electrons instead. Materials for which this can be applied are much more limited since the material needs to both have an unpaired spin and be paramagnetic. The Mössbauer effect is the resonant and recoil-free emission and absorption of gamma ray photons by atoms bound in a solid form. 
Resonance in particle physics appears in similar circumstances to classical physics at the level of quantum mechanics and quantum field theory. Resonances can also be thought of as unstable particles, with the formula in the Universal resonance curve section of this article applying if Γ is the particle's decay rate and Ω is the particle's mass M. In that case, the formula comes from the particle's propagator, with its mass replaced by the complex number M + iΓ. The formula is further related to the particle's decay rate by the optical theorem. Disadvantages A column of soldiers marching in regular step on a narrow and structurally flexible bridge can set it into dangerously large amplitude oscillations. On April 12, 1831, the Broughton Suspension Bridge near Salford, England collapsed while a group of British soldiers were marching across. Since then, the British Army has had a standing order for soldiers to break stride when marching across bridges, to avoid resonance from their regular marching pattern affecting the bridge. Vibrations of a motor or engine can induce resonant vibration in its supporting structures if their natural frequency is close to that of the vibrations of the engine. A common example is the rattling sound of a bus body when the engine is left idling. Structural resonance of a suspension bridge induced by winds can lead to its catastrophic collapse. Several early suspension bridges in Europe and United States were destroyed by structural resonance induced by modest winds. The collapse of the Tacoma Narrows Bridge on 7 November 1940 is characterized in physics as a classic example of resonance. It has been argued by Robert H. Scanlan and others that the destruction was instead caused by aeroelastic flutter, a complicated interaction between the bridge and the winds passing through it—an example of a self oscillation, or a kind of "self-sustaining vibration" as referred to in the nonlinear theory of vibrations. Q factor The Q factor or quality factor is a dimensionless parameter that describes how under-damped an oscillator or resonator is, and characterizes the bandwidth of a resonator relative to its center frequency. A high value for Q indicates a lower rate of energy loss relative to the stored energy, i.e., the system is lightly damped. The parameter is defined by the equation: . The higher the Q factor, the greater the amplitude at the resonant frequency, and the smaller the bandwidth, or range of frequencies around resonance occurs. In electrical resonance, a high-Q circuit in a radio receiver is more difficult to tune, but has greater selectivity, and so would be better at filtering out signals from other stations. High Q oscillators are more stable. Examples that normally have a low Q factor include door closers (Q=0.5). Systems with high Q factors include tuning forks (Q=1000), atomic clocks and lasers (Q≈1011). Universal resonance curve The exact response of a resonance, especially for frequencies far from the resonant frequency, depends on the details of the physical system, and is usually not exactly symmetric about the resonant frequency, as illustrated for the simple harmonic oscillator above. 
For a lightly damped linear oscillator with a resonance frequency , the intensity of oscillations when the system is driven with a driving frequency is typically approximated by the following formula that is symmetric about the resonance frequency: Where the susceptibility links the amplitude of the oscillator to the driving force in frequency space: The intensity is defined as the square of the amplitude of the oscillations. This is a Lorentzian function, or Cauchy distribution, and this response is found in many physical situations involving resonant systems. is a parameter dependent on the damping of the oscillator, and is known as the linewidth of the resonance. Heavily damped oscillators tend to have broad linewidths, and respond to a wider range of driving frequencies around the resonant frequency. The linewidth is inversely proportional to the Q factor, which is a measure of the sharpness of the resonance. In radio engineering and electronics engineering, this approximate symmetric response is known as the universal resonance curve, a concept introduced by Frederick E. Terman in 1932 to simplify the approximate analysis of radio circuits with a range of center frequencies and Q values.
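A minimal sketch of this curve under one common convention (normalized so the peak is 1, with the full width at half maximum taken as roughly ω0/Q for a lightly damped resonator); exact normalizations vary between sources.

```python
def lorentzian_intensity(omega, omega0, linewidth):
    """Symmetric Lorentzian response: 1 at omega0, 1/2 at omega0 +/- linewidth/2."""
    half = linewidth / 2.0
    return half**2 / ((omega - omega0)**2 + half**2)

omega0, q_factor = 100.0, 50.0
linewidth = omega0 / q_factor                 # FWHM ~ omega0 / Q

for omega in (omega0 - linewidth, omega0 - linewidth / 2, omega0, omega0 + linewidth / 2):
    print(omega, round(lorentzian_intensity(omega, omega0, linewidth), 3))
# The response is 0.5 at the half-width points; a higher Q means a narrower linewidth
# and therefore a sharper, more selective resonance.
```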
Physical sciences
Waves
41698
https://en.wikipedia.org/wiki/Shield
Shield
A shield is a piece of personal armour held in the hand, which may or may not be strapped to the wrist or forearm. Shields are used to intercept specific attacks, whether from close-ranged weaponry like spears or long ranged projectiles such as arrows. They function as means of active blocks, as well as to provide passive protection by closing one or more lines of engagement during combat. Shields vary greatly in size and shape, ranging from large panels that protect the user's whole body to small models (such as the buckler) that were intended for hand-to-hand-combat use. Shields also vary a great deal in thickness; whereas some shields were made of relatively deep, absorbent, wooden planking to protect soldiers from the impact of spears and crossbow bolts, others were thinner and lighter and designed mainly for deflecting blade strikes (like the roromaraugi or qauata). Finally, shields vary greatly in shape, ranging in roundness to angularity, proportional length and width, symmetry and edge pattern; different shapes provide more optimal protection for infantry or cavalry, enhance portability, provide secondary uses such as ship protection or as a weapon and so on. In prehistory and during the era of the earliest civilisations, shields were made of wood, animal hide, woven reeds or wicker. In classical antiquity, the Barbarian Invasions and the Middle Ages, they were normally constructed of poplar tree, lime or another split-resistant timber, covered in some instances with a material such as leather or rawhide and often reinforced with a metal boss, rim or banding. They were carried by foot soldiers, knights and cavalry. Depending on time and place, shields could be round, oval, square, rectangular, triangular, bilabial or scalloped. Sometimes they took on the form of kites or flatirons, or had rounded tops on a rectangular base with perhaps an eye-hole, to look through when used with combat. The shield was held by a central grip or by straps with some going over or around the user's arm and one or more being held by the hand. Often shields were decorated with a painted pattern or an animal representation to show their army or clan. It was common for Aristocratic officials such and knights, barons, dukes, and kings to have their shields painted with customary designs known as a coat of arms. These designs developed into systematized heraldic devices during the High Middle Ages for purposes of battlefield identification. Even after the introduction of gunpowder and firearms to the battlefield, shields continued to be used by certain groups. In the 18th century, for example, Scottish Highland fighters liked to wield small shields known as targes, and as late as the 19th century, some non-industrialized peoples (such as Zulu warriors) employed them when waging wars. In the 20th and 21st century, shields have been used by military and police units that specialize in anti-terrorist actions, hostage rescue, riot control and siege-breaking. History Prehistory The first prototype of the shield was believed to be created in the Late Neolithic Age. However the oldest surviving shields date to sometime in the Bronze Age. The oldest form of shield was a protection device designed to block attacks by hand weapons, such as swords, axes and maces, or ranged weapons like sling-stones and arrows. Shields have varied greatly in construction over time and place. Sometimes shields were made of metal, but wood or animal hide construction was much more common; wicker and even turtle shells have been used. 
Many surviving examples of metal shields are generally felt to be ceremonial rather than practical, for example the Yetholm-type shields of the Bronze Age, or the Iron Age Battersea shield. Ancient Size and weight varied greatly. Lightly armored warriors relying on speed and surprise would generally carry light shields (pelte) that were either small or thin. Heavy troops might be equipped with robust shields that could cover most of the body. Many had a strap called a guige that allowed them to be slung over the user's back when not in use or on horseback. During the 14th–13th century BC, the Sards or Shardana, working as mercenaries for the Egyptian pharaoh Ramses II, utilized either large or small round shields against the Hittites. The Mycenaean Greeks used two types of shields: the "figure-of-eight" shield and a rectangular "tower" shield. These shields were made primarily from a wicker frame and then reinforced with leather. Covering the body from head to foot, the figure-of-eight and tower shield offered most of the warrior's body a good deal of protection in hand-to-hand combat. The Ancient Greek hoplites used a round, bowl-shaped wooden shield that was reinforced with bronze and called an aspis. The aspis was also the longest-lasting and most famous and influential of all of the ancient Greek shields. The Spartans used the aspis to create the Greek phalanx formation. Their shields offered protection not only for themselves but for their comrades to their left. Examples of Germanic wooden shields circa 350 BC – 500 AD survive from weapons sacrifices in Danish bogs. The heavily armored Roman legionaries carried large shields (scuta) that could provide far more protection, but made swift movement a little more difficult. The scutum originally had an oval shape, but gradually the curved tops and sides were cut to produce the familiar rectangular shape most commonly seen in the early Imperial legions. Famously, the Romans used their shields to create a tortoise-like formation called a testudo in which entire groups of soldiers would be enclosed in an armoured box to provide protection against missiles. Many ancient shield designs featured incuts of one sort or another. This was done to accommodate the shaft of a spear, thus facilitating tactics requiring the soldiers to stand close together forming a wall of shields. Post-classical Typical in the early European Middle Ages were round shields with light, non-splitting wood like linden, fir, alder, or poplar, usually reinforced with leather cover on one or both sides and occasionally metal rims, encircling a metal shield boss. These light shields suited a fighting style where each incoming blow is intercepted with the boss in order to deflect it. The Normans introduced the kite shield around the 10th century, which was rounded at the top and tapered at the bottom. This gave some protection to the user's legs, without adding too much to the total weight of the shield. The kite shield predominantly features enarmes, leather straps used to grip the shield tight to the arm. Used by foot and mounted troops alike, it gradually came to replace the round shield as the common choice until the end of the 12th century, when more efficient limb armour allowed the shields to grow shorter, and be entirely replaced by the 14th century. As body armour improved, knight's shields became smaller, leading to the familiar heater shield style. 
Both kite and heater style shields were made of several layers of laminated wood, with a gentle curve in cross section. The heater style inspired the shape of the symbolic heraldic shield that is still used today. Eventually, specialised shapes were developed such as the bouche, which had a lance rest cut into the upper corner of the lance side, to help guide it in combat or tournament. Free standing shields called pavises, which were propped up on stands, were used by medieval crossbowmen who needed protection while reloading. In time, some armoured foot knights gave up shields entirely in favour of mobility and two-handed weapons. Other knights and common soldiers adopted the buckler, giving rise to the term "swashbuckler". The buckler is a small round shield, typically between 8 and 16 inches (20–40 cm) in diameter. The buckler was one of very few types of shield that were usually made of metal. Small and light, the buckler was easily carried by being hung from a belt; it gave little protection from missiles and was reserved for hand-to-hand combat where it served both for protection and offence. The buckler's use began in the Middle Ages and continued well into the 16th century. In Italy, the targa, parma, and rotella were used by common people, fencers and even knights. The development of plate armour made shields less and less common as it eliminated the need for a shield. Lightly armoured troops continued to use shields after men-at-arms and knights ceased to use them. Shields continued in use even after gunpowder powered weapons made them essentially obsolete on the battlefield. In the 18th century, the Scottish clans used a small, round targe that was partially effective against the firearms of the time, although it was arguably more often used against British infantry bayonets and cavalry swords in close-in fighting. During the 19th century, non-industrial cultures with little access to guns were still using war shields. Zulu warriors carried large lightweight shields called Ishlangu made from a single ox hide supported by a wooden spine. This was used in combination with a short spear (iklwa) and/or club. Other African shields include Glagwa from Cameroon or Nguba from Congo. Modern Law enforcement shields Shields for protection from armed attack are still used by many police forces around the world. These modern shields are usually intended for two broadly distinct purposes. The first type, riot shields, are used for riot control and can be made from metal or polymers such as polycarbonate Lexan or Makrolon or boPET Mylar. These typically offer protection from relatively large and low velocity projectiles, such as rocks and bottles, as well as blows from fists or clubs. Synthetic riot shields are normally transparent, allowing full use of the shield without obstructing vision. Similarly, metal riot shields often have a small window at eye level for this purpose. These riot shields are most commonly used to block and push back crowds when the users stand in a "wall" to block protesters, and to protect against shrapnel, projectiles like stones and bricks, molotov cocktails, and during hand-to-hand combat. The second type of modern police shield is the bullet-resistant ballistic shield, also called tactical shield. These shields are typically manufactured from advanced synthetics such as Kevlar and are designed to be bulletproof, or at least bullet resistant. Two types of shields are available: Light level IIIA shields are designed to stop pistol cartridges. 
Heavy level III and IV shields are designed to stop rifle cartridges. Tactical shields often have a firing port so that the officer holding the shield can fire a weapon while being protected by the shield, and they often have a bulletproof glass viewing port. They are typically employed by specialist police, such as SWAT teams, in high-risk entry and siege scenarios such as hostage rescue and breaching gang compounds, as well as in antiterrorism operations. Law enforcement shields often bear large signs stating "POLICE" (or the name of a force, such as "US MARSHALS") to indicate that the user is a law enforcement officer. List Aspis Ballistic shield Battersea Shield Buckler Escutcheon (heraldic shield) Glagwa Heater shield Kite shield Nguni shield Pavise Qauata Riot shield Roromaraugi Scutum (shield) Shield boss Targe Yetholm-type shields Human shield Component Enarmes Guige Tactics Phalanx Schiltron Shield wall Testudo formation
Technology
Armour
41706
https://en.wikipedia.org/wiki/Signal-to-noise%20ratio
Signal-to-noise ratio
Signal-to-noise ratio (SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR is defined as the ratio of signal power to noise power, often expressed in decibels. A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise. SNR is an important parameter that affects the performance and quality of systems that process or transmit signals, such as communication systems, audio systems, radar systems, imaging systems, and data acquisition systems. A high SNR means that the signal is clear and easy to detect or interpret, while a low SNR means that the signal is corrupted or obscured by noise and may be difficult to distinguish or recover. SNR can be improved by various methods, such as increasing the signal strength, reducing the noise level, filtering out unwanted noise, or using error correction techniques. SNR also determines the maximum possible amount of data that can be transmitted reliably over a given channel, which depends on its bandwidth and SNR. This relationship is described by the Shannon–Hartley theorem, which is a fundamental law of information theory. SNR can be calculated using different formulas depending on how the signal and noise are measured and defined. The most common way to express SNR is in decibels, which is a logarithmic scale that makes it easier to compare large or small values. Other definitions of SNR may use different factors or bases for the logarithm, depending on the context and application. Definition One definition of signal-to-noise ratio is the ratio of the power of a signal (meaningful input) to the power of background noise (meaningless or unwanted input): where is average power. Both signal and noise power must be measured at the same or equivalent points in a system, and within the same system bandwidth. The signal-to-noise ratio of a random variable () to random noise is: where E refers to the expected value, which in this case is the mean square of . If the signal is simply a constant value of , this equation simplifies to: If the noise has expected value of zero, as is common, the denominator is its variance, the square of its standard deviation . The signal and the noise must be measured the same way, for example as voltages across the same impedance. Their root mean squares can alternatively be used according to: where is root mean square (RMS) amplitude (for example, RMS voltage). Decibels Because many signals have a very wide dynamic range, signals are often expressed using the logarithmic decibel scale. Based upon the definition of decibel, signal and noise may be expressed in decibels (dB) as and In a similar manner, SNR may be expressed in decibels as Using the definition of SNR Using the quotient rule for logarithms Substituting the definitions of SNR, signal, and noise in decibels into the above equation results in an important formula for calculating the signal to noise ratio in decibels, when the signal and noise are also in decibels: In the above formula, P is measured in units of power, such as watts (W) or milliwatts (mW), and the signal-to-noise ratio is a pure number. However, when the signal and noise are measured in volts (V) or amperes (A), which are measures of amplitude, they must first be squared to obtain a quantity proportional to power, as shown below: Dynamic range The concepts of signal-to-noise ratio and dynamic range are closely related. 
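A small sketch of these formulas with arbitrary example values: SNR in decibels computed from power uses a factor of 10, while SNR computed from amplitude quantities such as RMS voltage uses a factor of 20, because the amplitudes are squared to obtain power.

```python
import math

def snr_db_from_power(p_signal, p_noise):
    """SNR in decibels from signal and noise power measured at the same point and bandwidth."""
    return 10 * math.log10(p_signal / p_noise)

def snr_db_from_amplitude(a_signal, a_noise):
    """SNR in decibels from RMS amplitudes (e.g. volts); squaring gives the factor of 20."""
    return 20 * math.log10(a_signal / a_noise)

print(snr_db_from_power(1.0, 0.001))       # 1 W of signal over 1 mW of noise  -> 30.0 dB
print(snr_db_from_amplitude(1.0, 0.01))    # 1 V RMS of signal over 10 mV RMS  -> 40.0 dB
```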
Dynamic range The concepts of signal-to-noise ratio and dynamic range are closely related. Dynamic range measures the ratio between the strongest un-distorted signal on a channel and the minimum discernible signal, which for most purposes is the noise level. SNR measures the ratio between an arbitrary signal level (not necessarily the most powerful signal possible) and noise. Measuring signal-to-noise ratios requires the selection of a representative or reference signal. In audio engineering, the reference signal is usually a sine wave at a standardized nominal or alignment level, such as 1 kHz at +4 dBu (1.228 V RMS). SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that instantaneous signal-to-noise ratios will be considerably different. The concept can be understood as normalizing the noise level to 1 (0 dB) and measuring how far the signal 'stands out'. Difference from conventional power In physics, the average power of an AC signal is defined as the average value of voltage times current; for resistive (non-reactive) circuits, where voltage and current are in phase, this is equivalent to the product of the RMS voltage and current: P = V_rms × I_rms = V_rms² / R. But in signal processing and communication, one usually assumes that R = 1 Ω, so that the resistance factor is usually not included while measuring the power or energy of a signal. This may cause some confusion among readers, but the resistance factor is not significant for typical operations performed in signal processing, or for computing power ratios. For most cases, the power of a signal would be considered to be simply P = V_rms². Alternative definition An alternative definition of SNR is as the reciprocal of the coefficient of variation, i.e., the ratio of mean to standard deviation of a signal or measurement: SNR = μ / σ, where μ is the signal mean or expected value and σ is the standard deviation of the noise, or an estimate thereof. Notice that such an alternative definition is only useful for variables that are always non-negative (such as photon counts and luminance), and it is only an approximation to the power-ratio definition, since the mean of a signal is not in general equal to its RMS value. It is commonly used in image processing, where the SNR of an image is usually calculated as the ratio of the mean pixel value to the standard deviation of the pixel values over a given neighborhood. Sometimes SNR is defined as the square of the alternative definition above, SNR = μ² / σ², in which case it is equivalent to the more common definition. This definition is closely related to the sensitivity index or d′, when assuming that the signal has two states separated by signal amplitude μ, and the noise standard deviation σ does not change between the two states. The Rose criterion (named after Albert Rose) states that an SNR of at least 5 is needed to be able to distinguish image features with certainty. An SNR less than 5 means less than 100% certainty in identifying image details. Yet another alternative, very specific, and distinct definition of SNR is employed to characterize the sensitivity of imaging systems; see Signal-to-noise ratio (imaging). Related measures are the "contrast ratio" and the "contrast-to-noise ratio". Modulation system measurements Amplitude modulation Channel signal-to-noise ratio is given by (SNR)C,AM = A_C²(1 + k_a²P) / (2 W N_0), where W is the bandwidth, k_a is the modulation index (amplitude sensitivity), A_C is the carrier amplitude, P is the power of the message signal, and N_0/2 is the noise power spectral density. Output signal-to-noise ratio (of AM receiver) is given by (SNR)O,AM = A_C² k_a² P / (2 W N_0). Frequency modulation Channel signal-to-noise ratio is given by (SNR)C,FM = A_C² / (2 W N_0). Output signal-to-noise ratio is given by (SNR)O,FM = 3 A_C² k_f² P / (2 N_0 W³), where k_f is the frequency sensitivity.
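Stepping back to the alternative mean-to-standard-deviation definition above, here is a minimal sketch (the synthetic image patch and the function name are illustrative assumptions):

```python
import numpy as np

def image_snr(patch: np.ndarray) -> float:
    """SNR of an image region as mean pixel value over standard deviation."""
    return patch.mean() / patch.std()

rng = np.random.default_rng(1)
# A nominally uniform patch (mean level 100) with additive noise (sigma = 5).
patch = 100 + 5 * rng.standard_normal((64, 64))

snr = image_snr(patch)
print(f"SNR ~ {snr:.1f}")  # about 20, comfortably above the Rose criterion of 5
```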
Noise reduction All real measurements are disturbed by noise. This includes electronic noise, but can also include external events that affect the measured phenomenon, such as wind, vibrations, the gravitational attraction of the moon, variations of temperature, variations of humidity, etc., depending on what is measured and on the sensitivity of the device. It is often possible to reduce the noise by controlling the environment. Internal electronic noise of measurement systems can be reduced through the use of low-noise amplifiers. When the characteristics of the noise are known and are different from those of the signal, it is possible to use a filter to reduce the noise. For example, a lock-in amplifier can extract a narrow bandwidth signal from broadband noise a million times stronger. When the signal is constant or periodic and the noise is random, it is possible to enhance the SNR by averaging the measurements. In this case the noise goes down as the square root of the number of averaged samples. Digital signals When a measurement is digitized, the number of bits used to represent the measurement determines the maximum possible signal-to-noise ratio. This is because the minimum possible noise level is the error caused by the quantization of the signal, sometimes called quantization noise. This noise level is non-linear and signal-dependent; different calculations exist for different signal models. Quantization noise is modeled as an analog error signal summed with the signal before quantization ("additive noise"). This theoretical maximum SNR assumes a perfect input signal. If the input signal is already noisy (as is usually the case), the signal's noise may be larger than the quantization noise. Real analog-to-digital converters also have other sources of noise that further decrease the SNR compared to the theoretical maximum from the idealized quantization noise, including the intentional addition of dither. Although noise levels in a digital system can be expressed using SNR, it is more common to use Eb/N0, the ratio of energy per bit to noise power spectral density. The modulation error ratio (MER) is a measure of the SNR in a digitally modulated signal. Fixed point For n-bit integers with equal distance between quantization levels (uniform quantization), the dynamic range (DR) is also determined. Assuming a uniform distribution of input signal values, the quantization noise is a uniformly distributed random signal with a peak-to-peak amplitude of one quantization level, making the amplitude ratio 2^n : 1. The formula is then: DR_dB = 20 log10(2^n) ≈ 6.02·n dB. This relationship is the origin of statements like "16-bit audio has a dynamic range of 96 dB". Each extra quantization bit increases the dynamic range by roughly 6 dB. Assuming a full-scale sine wave signal (that is, the quantizer is designed such that it has the same minimum and maximum values as the input signal), the quantization noise approximates a sawtooth wave with peak-to-peak amplitude of one quantization level and uniform distribution. In this case, the SNR is approximately SNR_dB ≈ 6.02·n + 1.76 dB. Floating point Floating-point numbers provide a way to trade off signal-to-noise ratio for an increase in dynamic range. For n-bit floating-point numbers, with n − m bits in the mantissa and m bits in the exponent, the corresponding figures are DR_dB ≈ 6.02·2^m dB and SNR_dB ≈ 6.02·(n − m) dB. The dynamic range is much larger than fixed-point, but at a cost of a worse signal-to-noise ratio. This makes floating-point preferable in situations where the dynamic range is large or unpredictable. Fixed-point's simpler implementations can be used with no signal quality disadvantage in systems where dynamic range is less than 6.02m.
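A quick numerical check of the fixed-point quantization formulas above (the bit depths are chosen purely for illustration):

```python
import math

def dynamic_range_db(n_bits: int) -> float:
    """Dynamic range of an n-bit uniform quantizer: 20*log10(2**n)."""
    return 20 * math.log10(2 ** n_bits)

def sine_snr_db(n_bits: int) -> float:
    """Max SNR for a full-scale sine wave: about 6.02*n + 1.76 dB.
    The extra 1.76 dB is 10*log10(3/2), the sine-vs-uniform-noise factor."""
    return dynamic_range_db(n_bits) + 10 * math.log10(1.5)

for bits in (8, 16, 24):
    print(f"{bits:2d} bits: DR = {dynamic_range_db(bits):6.2f} dB, "
          f"full-scale sine SNR = {sine_snr_db(bits):6.2f} dB")
```

The 16-bit row reproduces the familiar "96 dB" figure quoted above.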
The very large dynamic range of floating-point can be a disadvantage, since it requires more forethought in designing algorithms. Optical signals Optical signals have a carrier frequency (about 200 THz and more) that is much higher than the modulation frequency. This way the noise covers a bandwidth that is much wider than the signal itself. The resulting influence on the signal depends mainly on the filtering of the noise. To describe the signal quality without taking the receiver into account, the optical SNR (OSNR) is used. The OSNR is the ratio between the signal power and the noise power in a given bandwidth. Most commonly a reference bandwidth of 0.1 nm is used. This bandwidth is independent of the modulation format, the frequency and the receiver. For instance an OSNR of 20 dB/0.1 nm could be given, even though a 40 Gbit/s DPSK signal would not fit within this bandwidth. OSNR is measured with an optical spectrum analyzer. Types and abbreviations Signal-to-noise ratio may be abbreviated as SNR and less commonly as S/N. PSNR stands for peak signal-to-noise ratio. GSNR stands for geometric signal-to-noise ratio. SINR is the signal-to-interference-plus-noise ratio. Other uses While SNR is commonly quoted for electrical signals, it can be applied to any form of signal, for example isotope levels in an ice core, biochemical signaling between cells, or financial trading signals. The term is sometimes used metaphorically to refer to the ratio of useful information to false or irrelevant data in a conversation or exchange. For example, in online discussion forums and other online communities, off-topic posts and spam are regarded as noise that interferes with the signal of appropriate discussion. SNR can also be applied in marketing and to how business professionals manage information overload. Managing a healthy signal-to-noise ratio can help business executives improve their KPIs (key performance indicators). Similar concepts The signal-to-noise ratio is similar to Cohen's d, given by the difference of estimated means divided by the standard deviation of the data, and is related to the test statistic in the t-test.
Technology
Basics_4
null
41741
https://en.wikipedia.org/wiki/Standing%20wave
Standing wave
In physics, a standing wave, also known as a stationary wave, is a wave that oscillates in time but whose peak amplitude profile does not move in space. The peak amplitude of the wave oscillations at any point in space is constant with respect to time, and the oscillations at different points throughout the wave are in phase. The locations at which the absolute value of the amplitude is minimum are called nodes, and the locations where the absolute value of the amplitude is maximum are called antinodes. Standing waves were first described scientifically by Michael Faraday in 1831. Faraday observed standing waves on the surface of a liquid in a vibrating container. Franz Melde coined the term "standing wave" (German: stehende Welle or Stehwelle) around 1860 and demonstrated the phenomenon in his classic experiment with vibrating strings. This phenomenon can occur because the medium is moving in the direction opposite to the movement of the wave, or it can arise in a stationary medium as a result of interference between two waves traveling in opposite directions. The most common cause of standing waves is the phenomenon of resonance, in which standing waves occur inside a resonator due to interference between waves reflected back and forth at the resonator's resonant frequency. For waves of equal amplitude traveling in opposing directions, there is on average no net propagation of energy. Moving medium As an example of the first type, under certain meteorological conditions standing waves form in the atmosphere in the lee of mountain ranges. Such waves are often exploited by glider pilots. Standing waves and hydraulic jumps also form on fast flowing river rapids and tidal currents such as the Saltstraumen maelstrom. A requirement for this in river currents is a flowing water with shallow depth in which the inertia of the water overcomes its gravity due to the supercritical flow speed (Froude number: 1.7 – 4.5, surpassing 4.5 results in direct standing wave) and is therefore neither significantly slowed down by the obstacle nor pushed to the side. Many standing river waves are popular river surfing breaks. Opposing waves As an example of the second type, a standing wave in a transmission line is a wave in which the distribution of current, voltage, or field strength is formed by the superposition of two waves of the same frequency propagating in opposite directions. The effect is a series of nodes (zero displacement) and anti-nodes (maximum displacement) at fixed points along the transmission line. Such a standing wave may be formed when a wave is transmitted into one end of a transmission line and is reflected from the other end by an impedance mismatch, i.e., discontinuity, such as an open circuit or a short. The failure of the line to transfer power at the standing wave frequency will usually result in attenuation distortion. In practice, losses in the transmission line and other components mean that a perfect reflection and a pure standing wave are never achieved. The result is a partial standing wave, which is a superposition of a standing wave and a traveling wave. The degree to which the wave resembles either a pure standing wave or a pure traveling wave is measured by the standing wave ratio (SWR). Another example is standing waves in the open ocean formed by waves with the same wave period moving in opposite directions. These may form near storm centres, or from reflection of a swell at the shore, and are the source of microbaroms and microseisms. 
Mathematical description This section considers representative one- and two-dimensional cases of standing waves. First, an example of an infinite length string shows how identical waves traveling in opposite directions interfere to produce standing waves. Next, two finite length string examples with different boundary conditions demonstrate how the boundary conditions restrict the frequencies that can form standing waves. Next, the example of sound waves in a pipe demonstrates how the same principles can be applied to longitudinal waves with analogous boundary conditions. Standing waves can also occur in two- or three-dimensional resonators. With standing waves on two-dimensional membranes such as drumheads, illustrated in the animations above, the nodes become nodal lines, lines on the surface at which there is no movement, that separate regions vibrating with opposite phase. These nodal line patterns are called Chladni figures. In three-dimensional resonators, such as musical instrument sound boxes and microwave cavity resonators, there are nodal surfaces. This section includes a two-dimensional standing wave example with a rectangular boundary to illustrate how to extend the concept to higher dimensions. Standing wave on an infinite length string To begin, consider a string of infinite length along the x-axis that is free to be stretched transversely in the y direction. For a harmonic wave traveling to the right along the string, the string's displacement in the y direction as a function of position x and time t is y_R(x, t) = y_max sin(2πx/λ − ωt). The displacement in the y-direction for an identical harmonic wave traveling to the left is y_L(x, t) = y_max sin(2πx/λ + ωt), where y_max is the amplitude of the displacement of the string for each wave, ω is the angular frequency or equivalently 2π times the frequency f, and λ is the wavelength of the wave. For identical right- and left-traveling waves on the same string, the total displacement of the string is the sum of y_R and y_L: y(x, t) = y_max[sin(2πx/λ − ωt) + sin(2πx/λ + ωt)]. Using the trigonometric sum-to-product identity sin a + sin b = 2 sin((a + b)/2) cos((a − b)/2), this becomes y(x, t) = 2 y_max sin(2πx/λ) cos(ωt). (1) Equation (1) does not describe a traveling wave. At any position x, y(x, t) simply oscillates in time with an amplitude that varies in the x-direction as 2 y_max sin(2πx/λ). The animation at the beginning of this article depicts what is happening. As the left-traveling blue wave and right-traveling green wave interfere, they form the standing red wave that does not travel and instead oscillates in place. Because the string is of infinite length, it has no boundary condition for its displacement at any point along the x-axis. As a result, a standing wave can form at any frequency. At locations on the x-axis that are even multiples of a quarter wavelength, x = 0, λ/2, λ, 3λ/2, ..., the amplitude is always zero. These locations are called nodes. At locations on the x-axis that are odd multiples of a quarter wavelength, x = λ/4, 3λ/4, 5λ/4, ..., the amplitude is maximal, with a value of twice the amplitude of the right- and left-traveling waves that interfere to produce this standing wave pattern. These locations are called anti-nodes. The distance between two consecutive nodes or anti-nodes is half the wavelength, λ/2.
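A small numerical sketch can confirm the sum-to-product step above by sampling the two traveling waves and comparing their sum against 2·y_max·sin(2πx/λ)·cos(ωt); all parameter values below are arbitrary illustrations:

```python
import numpy as np

y_max, lam, f = 1.0, 2.0, 3.0           # amplitude, wavelength, frequency
omega, k = 2 * np.pi * f, 2 * np.pi / lam

x = np.linspace(0, 4 * lam, 1000)
t = 0.123                                # any instant works

y_right = y_max * np.sin(k * x - omega * t)   # right-traveling wave
y_left = y_max * np.sin(k * x + omega * t)    # left-traveling wave
y_standing = 2 * y_max * np.sin(k * x) * np.cos(omega * t)

# The superposition equals the standing-wave form to machine precision.
assert np.allclose(y_right + y_left, y_standing)
```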
Standing wave on a string with two fixed ends Next, consider a string with fixed ends at x = 0 and x = L. The string will have some damping as it is stretched by traveling waves, but assume the damping is very small. Suppose that at the fixed end a sinusoidal force is applied that drives the string up and down in the y-direction with a small amplitude at some frequency f. In this situation, the driving force produces a right-traveling wave. That wave reflects off the right fixed end and travels back to the left, reflects again off the left fixed end and travels back to the right, and so on. Eventually, a steady state is reached where the string has identical right- and left-traveling waves as in the infinite-length case, and the power dissipated by damping in the string equals the power supplied by the driving force, so the waves have constant amplitude. Equation (1) still describes the standing wave pattern that can form on this string, but now it is subject to boundary conditions where y = 0 at x = 0 and at x = L, because the string is fixed at x = L and because we assume the driving force at the fixed end has small amplitude. Checking the values of y at the two ends, y(0, t) = 0 and y(L, t) = 2 y_max sin(2πL/λ) cos(ωt) = 0. This boundary condition is in the form of the Sturm–Liouville formulation. The latter boundary condition is satisfied when sin(2πL/λ) = 0. L is given, so the boundary condition restricts the wavelength of the standing waves to λ = 2L/n, n = 1, 2, 3, .... (2) Waves can only form standing waves on this string if they have a wavelength that satisfies this relationship with L. If waves travel with speed v along the string, then equivalently the frequency of the standing waves is restricted to f = v/λ = nv/(2L). The standing wave with n = 1 oscillates at the fundamental frequency and has a wavelength that is twice the length of the string. Higher integer values of n correspond to modes of oscillation called harmonics or overtones. Any standing wave on the string will have n + 1 nodes including the fixed ends and n anti-nodes. To compare this example's nodes to the description of nodes for standing waves in the infinite length string, Equation (2) can be rewritten as λ = 4L/n, n = 2, 4, 6, .... In this variation of the expression for the wavelength, n must be even. Cross multiplying, we see that because L is a node, it is an even multiple of a quarter wavelength, L = nλ/4, n = 2, 4, 6, .... This example demonstrates a type of resonance, and the frequencies that produce standing waves can be referred to as resonant frequencies. Standing wave on a string with one fixed end Next, consider the same string of length L, but this time it is only fixed at x = 0. At x = L, the string is free to move in the y direction. For example, the string might be tied at x = L to a ring that can slide freely up and down a pole. The string again has small damping and is driven by a small driving force at x = 0. In this case, Equation (1) still describes the standing wave pattern that can form on the string, and the string has the same boundary condition of y = 0 at x = 0. However, at x = L where the string can move freely there should be an anti-node with maximal amplitude of y. Equivalently, this boundary condition of the "free end" can be stated as ∂y/∂x = 0 at x = L, which is in the form of the Sturm–Liouville formulation. The intuition for this boundary condition at x = L is that the motion of the "free end" will follow that of the point to its left. Reviewing Equation (1), for x = L the largest amplitude of y occurs when sin(2πL/λ) = ±1, that is, when 2πL/λ = π/2, 3π/2, 5π/2, .... This leads to a different set of wavelengths than in the two-fixed-ends example. Here, the wavelength of the standing waves is restricted to λ = 4L/n, n = 1, 3, 5, .... Equivalently, the frequency is restricted to f = nv/(4L). In this example n only takes odd values. Because L is an anti-node, it is an odd multiple of a quarter wavelength, L = nλ/4, n = 1, 3, 5, .... Thus the fundamental mode in this example only has one quarter of a complete sine cycle (zero at x = 0 and the first peak at x = L), the first harmonic has three quarters of a complete sine cycle, and so on. This example also demonstrates a type of resonance, and the frequencies that produce standing waves are called resonant frequencies.
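The two frequency rules just derived are easy to tabulate; this sketch lists the first few resonant frequencies for both boundary conditions (the string length and wave speed are arbitrary example values):

```python
def harmonics_two_fixed_ends(v: float, L: float, count: int) -> list[float]:
    """Resonant frequencies f_n = n*v/(2L) for n = 1, 2, 3, ..."""
    return [n * v / (2 * L) for n in range(1, count + 1)]

def harmonics_one_fixed_end(v: float, L: float, count: int) -> list[float]:
    """Resonant frequencies f_n = n*v/(4L) for odd n = 1, 3, 5, ..."""
    return [n * v / (4 * L) for n in range(1, 2 * count, 2)]

v, L = 100.0, 0.5   # wave speed (m/s) and string length (m), illustrative
print(harmonics_two_fixed_ends(v, L, 4))  # [100.0, 200.0, 300.0, 400.0]
print(harmonics_one_fixed_end(v, L, 4))   # [50.0, 150.0, 250.0, 350.0]
```

Note how the one-fixed-end string skips the even harmonics, exactly as the odd-n restriction requires.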
Standing wave in a pipe Consider a standing wave in a pipe of length L. The air inside the pipe serves as the medium for longitudinal sound waves traveling to the right or left through the pipe. While the transverse waves on the string from the previous examples vary in their displacement perpendicular to the direction of wave motion, the waves traveling through the air in the pipe vary in terms of their pressure and longitudinal displacement along the direction of wave motion. The wave propagates by alternately compressing and expanding air in segments of the pipe, which displaces the air slightly from its rest position and transfers energy to neighboring segments through the forces exerted by the alternating high and low air pressures. Equations resembling those for the wave on a string can be written for the change in pressure Δp due to a right- or left-traveling wave in the pipe: Δp_R(x, t) = p_max sin(2πx/λ − ωt) and Δp_L(x, t) = p_max sin(2πx/λ + ωt), where p_max is the pressure amplitude or the maximum increase or decrease in air pressure due to each wave, ω is the angular frequency or equivalently 2π times the frequency f, and λ is the wavelength of the wave. If identical right- and left-traveling waves travel through the pipe, the resulting superposition is described by the sum Δp(x, t) = Δp_R + Δp_L = 2 p_max sin(2πx/λ) cos(ωt). This formula for the pressure is of the same form as Equation (1), so a stationary pressure wave forms that is fixed in space and oscillates in time. If the end of a pipe is closed, the pressure is maximal since the closed end of the pipe exerts a force that restricts the movement of air. This corresponds to a pressure anti-node (which is a node for molecular motions, because the molecules near the closed end cannot move). If the end of the pipe is open, the pressure variations are very small, corresponding to a pressure node (which is an anti-node for molecular motions, because the molecules near the open end can move freely). The exact location of the pressure node at an open end is actually slightly beyond the open end of the pipe, so the effective length of the pipe for the purpose of determining resonant frequencies is slightly longer than its physical length. This difference in length is ignored in this example. In terms of reflections, open ends partially reflect waves back into the pipe, allowing some energy to be released into the outside air. Ideally, closed ends reflect the entire wave back in the other direction. First consider a pipe that is open at both ends, for example an open organ pipe or a recorder. Given that the pressure must be zero at both open ends, the boundary conditions are analogous to the string with two fixed ends, so standing waves only occur when the wavelength satisfies λ = 2L/n, n = 1, 2, 3, ..., or equivalently when the frequency is f = nv/(2L), where v is the speed of sound. Next, consider a pipe that is open at x = 0 (and therefore has a pressure node) and closed at x = L (and therefore has a pressure anti-node). The closed "free end" boundary condition for the pressure at x = L can be stated as ∂(Δp)/∂x = 0, which is in the form of the Sturm–Liouville formulation. The intuition for this boundary condition at x = L is that the pressure of the closed end will follow that of the point to its left. Examples of this setup include a bottle and a clarinet. This pipe has boundary conditions analogous to the string with only one fixed end. Its standing waves have wavelengths restricted to λ = 4L/n, n = 1, 3, 5, ..., or equivalently the frequency of standing waves is restricted to f = nv/(4L). For the case where one end is closed, n only takes odd values, just like in the case of the string fixed at only one end.
So far, the wave has been written in terms of its pressure as a function of position x and time. Alternatively, the wave can be written in terms of its longitudinal displacement of air, where air in a segment of the pipe moves back and forth slightly in the x-direction as the pressure varies and waves travel in either or both directions. The change in pressure Δp and longitudinal displacement s are related as where ρ is the density of the air. In terms of longitudinal displacement, closed ends of pipes correspond to nodes since air movement is restricted and open ends correspond to anti-nodes since the air is free to move. A similar, easier to visualize phenomenon occurs in longitudinal waves propagating along a spring. We can also consider a pipe that is closed at both ends. In this case, both ends will be pressure anti-nodes or equivalently both ends will be displacement nodes. This example is analogous to the case where both ends are open, except the standing wave pattern has a phase shift along the x-direction to shift the location of the nodes and anti-nodes. For example, the longest wavelength that resonates–the fundamental mode–is again twice the length of the pipe, except that the ends of the pipe have pressure anti-nodes instead of pressure nodes. Between the ends there is one pressure node. In the case of two closed ends, the wavelength is again restricted to and the frequency is again restricted to A Rubens tube provides a way to visualize the pressure variations of the standing waves in a tube with two closed ends. 2D standing wave with a rectangular boundary Next, consider transverse waves that can move along a two dimensional surface within a rectangular boundary of length Lx in the x-direction and length Ly in the y-direction. Examples of this type of wave are water waves in a pool or waves on a rectangular sheet that has been pulled taut. The waves displace the surface in the z-direction, with defined as the height of the surface when it is still. In two dimensions and Cartesian coordinates, the wave equation is where z(x,y,t) is the displacement of the surface, c is the speed of the wave. To solve this differential equation, let's first solve for its Fourier transform, with Taking the Fourier transform of the wave equation, This is an eigenvalue problem where the frequencies correspond to eigenvalues that then correspond to frequency-specific modes or eigenfunctions. Specifically, this is a form of the Helmholtz equation and it can be solved using separation of variables. Assume Dividing the Helmholtz equation by Z, This leads to two coupled ordinary differential equations. The x term equals a constant with respect to x that we can define as Solving for X(x), This x-dependence is sinusoidal–recalling Euler's formula–with constants Akx and Bkx determined by the boundary conditions. Likewise, the y term equals a constant with respect to y that we can define as and the dispersion relation for this wave is therefore Solving the differential equation for the y term, Multiplying these functions together and applying the inverse Fourier transform, z(x,y,t) is a superposition of modes where each mode is the product of sinusoidal functions for x, y, and t, The constants that determine the exact sinusoidal functions depend on the boundary conditions and initial conditions. To see how the boundary conditions apply, consider an example like the sheet that has been pulled taut where z(x,y,t) must be zero all around the rectangular boundary. 
For the x dependence, z(x,y,t) must vary in a way that it can be zero at both x = 0 and x = Lx for all values of y and t. As in the one dimensional example of the string fixed at both ends, the sinusoidal function that satisfies this boundary condition is X(x) = A sin(kx·x), with kx restricted to kx = nπ/Lx, n = 1, 2, 3, .... Likewise, the y dependence of z(x,y,t) must be zero at both y = 0 and y = Ly, which is satisfied by Y(y) = B sin(ky·y) with ky = mπ/Ly, m = 1, 2, 3, .... Restricting the wave numbers to these values also restricts the frequencies that resonate to ω = cπ√((n/Lx)² + (m/Ly)²). If the initial conditions for z(x,y,0) and its time derivative ż(x,y,0) are chosen so the t-dependence is a cosine function, then standing waves for this system take the form z(x,y,t) = z_max sin(nπx/Lx) sin(mπy/Ly) cos(ωt). So, standing waves inside this fixed rectangular boundary oscillate in time at certain resonant frequencies parameterized by the integers n and m. As they oscillate in time, they do not travel and their spatial variation is sinusoidal in both the x- and y-directions such that they satisfy the boundary conditions. The fundamental mode, n = 1 and m = 1, has a single antinode in the middle of the rectangle. Varying n and m gives complicated but predictable two-dimensional patterns of nodes and antinodes inside the rectangle. From the dispersion relation, in certain situations different modes, meaning different combinations of n and m, may resonate at the same frequency even though they have different shapes for their x- and y-dependence. For example, if the boundary is square, Lx = Ly = L, the modes n = 1 and m = 7, n = 7 and m = 1, and n = 5 and m = 5 all resonate at ω = cπ√50 / L. Recalling that ω determines the eigenvalue in the Helmholtz equation above, the number of modes corresponding to each frequency relates to the frequency's multiplicity as an eigenvalue. Standing wave ratio, phase, and energy transfer If the two oppositely moving traveling waves are not of the same amplitude, they will not cancel completely at the nodes, the points where the waves are 180° out of phase, so the amplitude of the standing wave will not be zero at the nodes, but merely a minimum. Standing wave ratio (SWR) is the ratio of the amplitude at the antinode (maximum) to the amplitude at the node (minimum). A pure standing wave will have an infinite SWR. It will also have a constant phase at any point in space (but it may undergo a 180° inversion every half cycle). A finite, non-zero SWR indicates a wave that is partially stationary and partially travelling. Such waves can be decomposed into a superposition of two waves: a travelling wave component and a stationary wave component. An SWR of one indicates that the wave does not have a stationary component – it is purely a travelling wave, since the ratio of amplitudes is equal to 1. A pure standing wave does not transfer energy from the source to the destination. However, the wave is still subject to losses in the medium. Such losses will manifest as a finite SWR, indicating a travelling wave component leaving the source to supply the losses. Even though the SWR is now finite, it may still be the case that no energy reaches the destination because the travelling component is purely supplying the losses. However, in a lossless medium, a finite SWR implies a definite transfer of energy to the destination. Examples One easy example to understand standing waves is two people shaking either end of a jump rope. If they shake in sync the rope can form a regular pattern of waves oscillating up and down, with stationary points along the rope where the rope is almost still (nodes) and points where the arc of the rope is maximum (antinodes).
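As a numerical aside to the rectangular-boundary discussion above, the degenerate mode families for a square boundary can be enumerated directly; a sketch (the side length and wave speed are arbitrary example values):

```python
import math
from collections import defaultdict

c, L = 1.0, 1.0          # wave speed and side length of a square boundary

def mode_frequency(n: int, m: int) -> float:
    """Angular frequency of the (n, m) mode: omega = c*pi*sqrt((n/L)**2 + (m/L)**2)."""
    return c * math.pi * math.hypot(n / L, m / L)

# Group low-order modes by frequency to expose degeneracies.
modes_by_freq = defaultdict(list)
for n in range(1, 9):
    for m in range(1, 9):
        modes_by_freq[round(mode_frequency(n, m), 9)].append((n, m))

degenerate = {f: nm for f, nm in modes_by_freq.items() if len(nm) > 2}
print(degenerate)   # e.g. the (1, 7), (7, 1), (5, 5) family shares one frequency
```

The grouping works because degeneracy here reduces to the integer question of which numbers can be written as a sum of two squares in more than one way.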
Acoustic resonance Standing waves are also observed in physical media such as strings and columns of air. Any waves traveling along the medium will reflect back when they reach the end. This effect is most noticeable in musical instruments where, at various multiples of a vibrating string or air column's natural frequency, a standing wave is created, allowing harmonics to be identified. Nodes occur at fixed ends and anti-nodes at open ends. If fixed at only one end, only odd-numbered harmonics are available. At the open end of a pipe the anti-node will not be exactly at the end as it is altered by its contact with the air and so end correction is used to place it exactly. The density of a string will affect the frequency at which harmonics will be produced; the greater the density the lower the frequency needs to be to produce a standing wave of the same harmonic. Visible light Standing waves are also observed in optical media such as optical waveguides and optical cavities. Lasers use optical cavities in the form of a pair of facing mirrors, which constitute a Fabry–Pérot interferometer. The gain medium in the cavity (such as a crystal) emits light coherently, exciting standing waves of light in the cavity. The wavelength of light is very short (in the range of nanometers, 10−9 m) so the standing waves are microscopic in size. One use for standing light waves is to measure small distances, using optical flats. X-rays Interference between X-ray beams can form an X-ray standing wave (XSW) field. Because of the short wavelength of X-rays (less than 1 nanometer), this phenomenon can be exploited for measuring atomic-scale events at material surfaces. The XSW is generated in the region where an X-ray beam interferes with a diffracted beam from a nearly perfect single crystal surface or a reflection from an X-ray mirror. By tuning the crystal geometry or X-ray wavelength, the XSW can be translated in space, causing a shift in the X-ray fluorescence or photoelectron yield from the atoms near the surface. This shift can be analyzed to pinpoint the location of a particular atomic species relative to the underlying crystal structure or mirror surface. The XSW method has been used to clarify the atomic-scale details of dopants in semiconductors, atomic and molecular adsorption on surfaces, and chemical transformations involved in catalysis. Mechanical waves Standing waves can be mechanically induced into a solid medium using resonance. One easy to understand example is two people shaking either end of a jump rope. If they shake in sync, the rope will form a regular pattern with nodes and antinodes and appear to be stationary, hence the name standing wave. Similarly a cantilever beam can have a standing wave imposed on it by applying a base excitation. In this case the free end moves the greatest distance laterally compared to any location along the beam. Such a device can be used as a sensor to track changes in frequency or phase of the resonance of the fiber. One application is as a measurement device for dimensional metrology. Seismic waves Standing surface waves on the Earth are observed as free oscillations of the Earth. Faraday waves The Faraday wave is a non-linear standing wave at the air-liquid interface induced by hydrodynamic instability. It can be used as a liquid-based template to assemble microscale materials. Seiches A seiche is an example of a standing wave in an enclosed body of water. 
It is characterised by the oscillatory behaviour of the water level at either end of the body and typically has a nodal point near the middle of the body where very little change in water level is observed. It should be distinguished from a simple storm surge where no oscillation is present. In sizeable lakes, the period of such oscillations may be between minutes and hours, for example Lake Geneva's longitudinal period is 73 minutes and its transversal seiche has a period of around 10 minutes, while Lake Huron can be seen to have resonances with periods between 1 and 2 hours. See Lake seiches.
Physical sciences
Waves
Physics
41789
https://en.wikipedia.org/wiki/Thermodynamic%20temperature
Thermodynamic temperature
Thermodynamic temperature is a quantity defined in thermodynamics as distinct from kinetic theory or statistical mechanics. Historically, thermodynamic temperature was defined by Lord Kelvin in terms of a macroscopic relation between thermodynamic work and heat transfer as defined in thermodynamics, but the kelvin was redefined by international agreement in 2019 in terms of phenomena that are now understood as manifestations of the kinetic energy of free motion of microscopic particles such as atoms, molecules, and electrons. From the thermodynamic viewpoint, for historical reasons having to do with how it is defined and measured, this microscopic kinetic definition is regarded as an "empirical" temperature. It was adopted because in practice it can generally be measured more precisely than can Kelvin's thermodynamic temperature. A thermodynamic temperature of zero is of particular importance for the third law of thermodynamics. By convention, it is reported on the Kelvin scale of temperature, in which the unit of measurement is the kelvin (unit symbol: K). For comparison, a temperature of 295 K corresponds to 21.85 °C and 71.33 °F. Overview Thermodynamic temperature, as distinct from SI temperature, is defined in terms of a macroscopic Carnot cycle. Thermodynamic temperature is of importance in thermodynamics because it is defined in purely thermodynamic terms. SI temperature is conceptually far different from thermodynamic temperature. Thermodynamic temperature was rigorously defined historically long before there was a fair knowledge of microscopic particles such as atoms, molecules, and electrons. The International System of Units (SI) specifies the international absolute scale for measuring temperature, and the unit of measure kelvin (unit symbol: K) for specific values along the scale. The kelvin is also used for denoting temperature intervals (a span or difference between two temperatures), as per the following example usage: "A 60/40 tin/lead solder is non-eutectic and is plastic through a range of 5 kelvins as it solidifies." A temperature interval of one degree Celsius is the same magnitude as one kelvin. The magnitude of the kelvin was redefined in 2019 in relation to the physical property underlying thermodynamic temperature: the kinetic energy of atomic free particle motion. The revision fixed the Boltzmann constant at exactly 1.380649×10⁻²³ joules per kelvin (J/K). The microscopic property that imbues material substances with a temperature can be readily understood by examining the ideal gas law, which relates, per the Boltzmann constant, how heat energy causes precisely defined changes in the pressure and temperature of certain gases. This is because monatomic gases like helium and argon behave kinetically like freely moving, perfectly elastic, spherical billiard balls that move only in a specific subset of the possible motions that can occur in matter: that comprising the three translational degrees of freedom. The translational degrees of freedom are the familiar billiard ball-like movements along the X, Y, and Z axes of 3D space (see Fig. 1, below). This is why the noble gases all have the same specific heat capacity per atom and why that value is the lowest of all the gases. Molecules (two or more chemically bound atoms), however, have internal structure and therefore have additional internal degrees of freedom (see Fig. 3, below), which makes molecules absorb more heat energy for any given amount of temperature rise than do the monatomic gases.
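The ideal gas law mentioned above, p·V = N·k_B·T, can be made concrete with a short sketch (the container volume and particle count are arbitrary example values chosen to reproduce a familiar result):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact since the 2019 SI revision)

def ideal_gas_pressure(n_particles: float, temp_k: float, volume_m3: float) -> float:
    """Pressure in pascals from the ideal gas law p = N*k_B*T / V."""
    return n_particles * K_B * temp_k / volume_m3

# One mole of a monatomic gas (~6.022e23 particles) in 22.4 L at 273.15 K
# should come out near one standard atmosphere (~101,325 Pa).
p = ideal_gas_pressure(6.02214076e23, 273.15, 0.0224)
print(f"p ~ {p:,.0f} Pa")
```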
Heat energy is born in all available degrees of freedom; this is in accordance with the equipartition theorem, so all available internal degrees of freedom have the same temperature as their three external degrees of freedom. However, the property that gives all gases their pressure, which is the net force per unit area on a container arising from gas particles recoiling off it, is a function of the kinetic energy borne in the freely moving atoms' and molecules' three translational degrees of freedom. Fixing the Boltzmann constant at a specific value, along with other rule making, had the effect of precisely establishing the magnitude of the unit interval of SI temperature, the kelvin, in terms of the average kinetic behavior of the noble gases. Moreover, the starting point of the thermodynamic temperature scale, absolute zero, was reaffirmed as the point at which zero average kinetic energy remains in a sample; the only remaining particle motion being that comprising random vibrations due to zero-point energy. Absolute zero of temperature Temperature scales are numerical. The numerical zero of a temperature scale is not bound to the absolute zero of temperature. Nevertheless, some temperature scales have their numerical zero coincident with the absolute zero of temperature. Examples are the International SI temperature scale, the Rankine temperature scale, and the thermodynamic temperature scale. Other temperature scales have their numerical zero far from the absolute zero of temperature. Examples are the Fahrenheit scale and the Celsius scale. At the zero point of thermodynamic temperature, absolute zero, the particle constituents of matter have minimal motion and can become no colder. Absolute zero, which is a temperature of zero kelvins (0 K), precisely corresponds to −273.15 °C and −459.67 °F. Matter at absolute zero has no remaining transferable average kinetic energy and the only remaining particle motion is due to an ever-pervasive quantum mechanical phenomenon called ZPE (zero-point energy). Though the atoms in, for instance, a container of liquid helium that was precisely at absolute zero would still jostle slightly due to zero-point energy, a theoretically perfect heat engine with such helium as one of its working fluids could never transfer any net kinetic energy (heat energy) to the other working fluid and no thermodynamic work could occur. Temperature is generally expressed in absolute terms when scientifically examining temperature's interrelationships with certain other physical properties of matter such as its volume or pressure (see Gay-Lussac's law), or the wavelength of its emitted black-body radiation. Absolute temperature is also useful when calculating chemical reaction rates (see Arrhenius equation). Furthermore, absolute temperature is typically used in cryogenics and related phenomena like superconductivity, as per the following example usage: "Conveniently, tantalum's transition temperature (T) of 4.4924 kelvin is slightly above the 4.2221 K boiling point of helium." Boltzmann constant The Boltzmann constant and its related formulas describe the realm of particle kinetics and velocity vectors whereas ZPE (zero-point energy) is an energy field that jostles particles in ways described by the mathematics of quantum mechanics. In atomic and molecular collisions in gases, ZPE introduces a degree of chaos, i.e., unpredictability, to rebound kinetics; it is as likely that there will be less ZPE-induced particle motion after a given collision as more. 
This random nature of ZPE is why it has no net effect upon either the pressure or volume of any bulk quantity (a statistically significant quantity of particles) of gases. However, in condensed matter, e.g., solids and liquids, ZPE causes inter-atomic jostling where atoms would otherwise be perfectly stationary. Inasmuch as the real-world effects that ZPE has on substances can vary as one alters a thermodynamic system (for example, due to ZPE, helium won't freeze unless under a pressure of at least 2.5 MPa (25 bar)), ZPE is very much a form of thermal energy and may properly be included when tallying a substance's internal energy. Rankine scale Though there have been many other temperature scales throughout history, there have been only two scales for measuring thermodynamic temperature which have absolute zero as their null point (0): the Kelvin scale and the Rankine scale. Throughout the scientific world, where modern measurements are nearly always made using the International System of Units, thermodynamic temperature is measured using the Kelvin scale. The Rankine scale is part of English engineering units and finds use in certain engineering fields, particularly in legacy reference works. The Rankine scale uses the degree Rankine (symbol: °R) as its unit, which is the same magnitude as the degree Fahrenheit (symbol: °F). A unit increment of one kelvin is exactly 1.8 times one degree Rankine; thus, to convert a specific temperature on the Kelvin scale to the Rankine scale, T(°R) = 1.8 × T(K), and to convert from a temperature on the Rankine scale to the Kelvin scale, T(K) = T(°R) / 1.8. Consequently, absolute zero is "0" for both scales, but the melting point of water ice (0 °C and 273.15 K) is 491.67 °R. To convert temperature intervals (a span or difference between two temperatures), the formulas from the preceding paragraph are applicable; for instance, an interval of 5 kelvins is precisely equal to an interval of 9 degrees Rankine.
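These scale conversions are one-liners; a sketch for completeness (the function names are illustrative choices):

```python
def kelvin_to_rankine(t_k: float) -> float:
    """Convert an absolute temperature from kelvins to degrees Rankine."""
    return t_k * 1.8

def rankine_to_kelvin(t_r: float) -> float:
    """Convert an absolute temperature from degrees Rankine to kelvins."""
    return t_r / 1.8

print(kelvin_to_rankine(273.15))  # 491.67 °R, the melting point of ice
print(rankine_to_kelvin(0.0))     # 0.0 K, absolute zero on both scales
```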
Modern redefinition of the kelvin For 65 years, between 1954 and the 2019 revision of the SI, a temperature interval of one kelvin was defined as 1/273.16 of the difference between the triple point of water and absolute zero. The 1954 resolution by the International Bureau of Weights and Measures (known by the French-language acronym BIPM), plus later resolutions and publications, defined the triple point of water as precisely 273.16 K and acknowledged that it was "common practice" to accept that, due to previous conventions (namely, that 0 °C had long been defined as the melting point of water and that the triple point of water had long been experimentally determined to be indistinguishably close to 0.01 °C), the difference between the Celsius scale and Kelvin scale is accepted as 273.15 kelvins; which is to say, 0 °C corresponds to 273.15 kelvins. The net effect of this as well as later resolutions was twofold: 1) they defined absolute zero as precisely 0 K, and 2) they defined that the triple point of special isotopically controlled water called Vienna Standard Mean Ocean Water occurred at precisely 273.16 K and 0.01 °C. One effect of the aforementioned resolutions was that the melting point of water, while very close to 273.15 K and 0 °C, was not a defining value and was subject to refinement with more precise measurements. The 1954 BIPM standard did a good job of establishing, within the uncertainties due to isotopic variations between water samples, temperatures around the freezing and triple points of water, but required that intermediate values between the triple point and absolute zero, as well as extrapolated values from room temperature and beyond, be experimentally determined via apparatus and procedures in individual labs. This shortcoming was addressed by the International Temperature Scale of 1990, or ITS-90, which defined 13 additional points, from 13.8033 K to 1,357.77 K. While definitional, ITS-90 had, and still has, some challenges, partly because eight of its extrapolated values depend upon the melting or freezing points of metal samples, which must remain exceedingly pure lest their melting or freezing points be affected (usually depressed). The 2019 revision of the SI was primarily for the purpose of decoupling much of the SI system's definitional underpinnings from the kilogram, which was the last physical artifact defining an SI base unit (a platinum/iridium cylinder stored under three nested bell jars in a safe located in France) and which had highly questionable stability. The solution required that four physical constants, including the Boltzmann constant, be definitionally fixed. Assigning the Boltzmann constant a precisely defined value had no practical effect on modern thermometry except for the most exquisitely precise measurements. Before the revision, the triple point of water was exactly 273.16 K and 0.01 °C, and the Boltzmann constant was experimentally determined to be 1.38064903(51)×10⁻²³ J/K, where the "(51)" denotes the uncertainty in the two least significant digits (the 03) and equals a relative standard uncertainty of 0.37 ppm. Afterwards, by defining the Boltzmann constant as exactly 1.380649×10⁻²³ J/K, the 0.37 ppm uncertainty was transferred to the triple point of water, which became an experimentally determined value of 273.1600±0.0001 K. That the triple point of water ended up being exceedingly close to 273.16 K after the SI revision was no accident; the final value of the Boltzmann constant was determined, in part, through clever experiments with argon and helium that used the triple point of water for their key reference temperature. Notwithstanding the 2019 revision, water triple-point cells continue to serve in modern thermometry as exceedingly precise calibration references at 273.16 K and 0.01 °C. Moreover, the triple point of water remains one of the 14 calibration points comprising ITS-90, which spans from the triple point of hydrogen (13.8033 K) to the freezing point of copper (1,357.77 K), a nearly hundredfold range of thermodynamic temperature.
Relationship of temperature, motions, conduction, and thermal energy Nature of kinetic energy, translational motion, and temperature The thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the mean average kinetic energy of a specific kind of particle motion known as translational motion. These simple movements in the three X, Y, and Z-axis dimensions of space mean the particles move in the three spatial degrees of freedom. This particular form of kinetic energy is sometimes referred to as kinetic temperature. Translational motion is but one form of heat energy and is what gives gases not only their temperature, but also their pressure and the vast majority of their volume. This relationship between the temperature, pressure, and volume of gases is established by the ideal gas law's formula p·V = N·k_B·T and is embodied in the gas laws. Though the kinetic energy borne exclusively in the three translational degrees of freedom comprises the thermodynamic temperature of a substance, molecules, as can be seen in Fig. 3, can have other degrees of freedom, all of which fall under three categories: bond length, bond angle, and rotational. All three additional categories are not necessarily available to all molecules, and even for molecules that can experience all three, some can be "frozen out" below a certain temperature. Nonetheless, all those degrees of freedom that are available to the molecules under a particular set of conditions contribute to the specific heat capacity of a substance; which is to say, they increase the amount of heat (kinetic energy) required to raise a given amount of the substance by one kelvin or one degree Celsius. The relationship of kinetic energy, mass, and velocity is given by the formula E_k = 1/2·m·v². Accordingly, particles with one unit of mass moving at one unit of velocity have precisely the same kinetic energy, and precisely the same temperature, as those with four times the mass but half the velocity. The extent to which the kinetic energy of translational motion in a statistically significant collection of atoms or molecules in a gas contributes to the pressure and volume of that gas is a proportional function of thermodynamic temperature as established by the Boltzmann constant (symbol: k_B). The Boltzmann constant also relates the thermodynamic temperature of a gas to the mean kinetic energy of an individual particle's translational motion as follows: Ē = (3/2)·k_B·T, where: Ē is the mean kinetic energy for an individual particle, and T is the thermodynamic temperature of the bulk quantity of the substance. While the Boltzmann constant is useful for finding the mean kinetic energy in a sample of particles, it is important to note that even when a substance is isolated and in thermodynamic equilibrium (all parts are at a uniform temperature and no heat is going into or out of it), the translational motions of individual atoms and molecules occur across a wide range of speeds (see animation in Fig. 1 above). At any one instant, the proportion of particles moving at a given speed within this range is determined by probability as described by the Maxwell–Boltzmann distribution. The graph shown here in Fig. 2 shows the speed distribution of 5500 K helium atoms. They have a most probable speed of 4.780 km/s (0.2092 s/km). However, a certain proportion of atoms at any given instant are moving faster while others are moving relatively slowly; some are momentarily at a virtual standstill (off the x-axis to the right). This graph uses inverse speed for its x-axis so the shape of the curve can easily be compared to the curves in Fig. 5 below. In both graphs, zero on the x-axis represents infinite temperature. Additionally, the x- and y-axes on both graphs are scaled proportionally. High speeds of translational motion Although very specialized laboratory equipment is required to directly detect translational motions, the resultant collisions by atoms or molecules with small particles suspended in a fluid produce Brownian motion that can be seen with an ordinary microscope. The translational motions of elementary particles are very fast and temperatures close to absolute zero are required to directly observe them.
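The 4.780 km/s figure quoted above for 5500 K helium can be checked directly from the Maxwell–Boltzmann most probable speed, v_p = sqrt(2·k_B·T/m), together with the mean translational kinetic energy; a sketch (helium's atomic mass is the only datum assumed beyond the text):

```python
import math

K_B = 1.380649e-23                        # Boltzmann constant, J/K (exact)
M_HELIUM = 4.002602 * 1.66053906660e-27   # mass of a helium-4 atom in kg

T = 5500.0                                # kelvins, as in the Fig. 2 example

mean_ke = 1.5 * K_B * T                   # mean translational KE per atom
v_p = math.sqrt(2 * K_B * T / M_HELIUM)   # most probable speed

print(f"mean KE per atom ~ {mean_ke:.3e} J")
print(f"most probable speed ~ {v_p / 1000:.3f} km/s")  # ~4.780 km/s
```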
For instance, when scientists at the NIST achieved a record-setting cold temperature of 700 nK (billionths of a kelvin) in 1994, they used optical lattice laser equipment to adiabatically cool cesium atoms. They then turned off the entrapment lasers and directly measured atom velocities of 7 mm per second to in order to calculate their temperature. Formulas for calculating the velocity and speed of translational motion are given in the following footnote. It is neither difficult to imagine atomic motions due to kinetic temperature, nor distinguish between such motions and those due to zero-point energy. Consider the following hypothetical thought experiment, as illustrated in Fig. 2.5 at left, with an atom that is exceedingly close to absolute zero. Imagine peering through a common optical microscope set to 400 power, which is about the maximum practical magnification for optical microscopes. Such microscopes generally provide fields of view a bit over 0.4 mm in diameter. At the center of the field of view is a single levitated argon atom (argon comprises about 0.93% of air) that is illuminated and glowing against a dark backdrop. If this argon atom was at a beyond-record-setting one-trillionth of a kelvin above absolute zero, and was moving perpendicular to the field of view towards the right, it would require 13.9 seconds to move from the center of the image to the 200-micron tick mark; this travel distance is about the same as the width of the period at the end of this sentence on modern computer monitors. As the argon atom slowly moved, the positional jitter due to zero-point energy would be much less than the 200-nanometer (0.0002 mm) resolution of an optical microscope. Importantly, the atom's translational velocity of 14.43 microns per second constitutes all its retained kinetic energy due to not being precisely at absolute zero. Were the atom precisely at absolute zero, imperceptible jostling due to zero-point energy would cause it to very slightly wander, but the atom would perpetually be located, on average, at the same spot within the field of view. This is analogous to a boat that has had its motor turned off and is now bobbing slightly in relatively calm and windless ocean waters; even though the boat randomly drifts to and fro, it stays in the same spot in the long term and makes no headway through the water. Accordingly, an atom that was precisely at absolute zero would not be "motionless", and yet, a statistically significant collection of such atoms would have zero net kinetic energy available to transfer to any other collection of atoms. This is because regardless of the kinetic temperature of the second collection of atoms, they too experience the effects of zero-point energy. Such are the consequences of statistical mechanics and the nature of thermodynamics. Internal motions of molecules and internal energy As mentioned above, there are other ways molecules can jiggle besides the three translational degrees of freedom that imbue substances with their kinetic temperature. As can be seen in the animation at right, molecules are complex objects; they are a population of atoms and thermal agitation can strain their internal chemical bonds in three different ways: via rotation, bond length, and bond angle movements; these are all types of internal degrees of freedom. This makes molecules distinct from monatomic substances (consisting of individual atoms) like the noble gases helium and argon, which have only the three translational degrees of freedom (the X, Y, and Z axis). 
Kinetic energy is stored in molecules' internal degrees of freedom, which gives them an internal temperature. Even though these motions are called "internal", the external portions of molecules still move—rather like the jiggling of a stationary water balloon. This permits the two-way exchange of kinetic energy between internal motions and translational motions with each molecular collision. Accordingly, as internal energy is removed from molecules, both their kinetic temperature (the kinetic energy of translational motion) and their internal temperature simultaneously diminish in equal proportions. This phenomenon is described by the equipartition theorem, which states that for any bulk quantity of a substance in equilibrium, the kinetic energy of particle motion is evenly distributed among all the active degrees of freedom available to the particles. Since the internal temperature of molecules are usually equal to their kinetic temperature, the distinction is usually of interest only in the detailed study of non-local thermodynamic equilibrium (LTE) phenomena such as combustion, the sublimation of solids, and the diffusion of hot gases in a partial vacuum. The kinetic energy stored internally in molecules causes substances to contain more heat energy at any given temperature and to absorb additional internal energy for a given temperature increase. This is because any kinetic energy that is, at a given instant, bound in internal motions, is not contributing to the molecules' translational motions at that same instant. This extra kinetic energy simply increases the amount of internal energy that substance absorbs for a given temperature rise. This property is known as a substance's specific heat capacity. Different molecules absorb different amounts of internal energy for each incremental increase in temperature; that is, they have different specific heat capacities. High specific heat capacity arises, in part, because certain substances' molecules possess more internal degrees of freedom than others do. For instance, room-temperature nitrogen, which is a diatomic molecule, has five active degrees of freedom: the three comprising translational motion plus two rotational degrees of freedom internally. Not surprisingly, in accordance with the equipartition theorem, nitrogen has five-thirds the specific heat capacity per mole (a specific number of molecules) as do the monatomic gases. Another example is gasoline (see table showing its specific heat capacity). Gasoline can absorb a large amount of heat energy per mole with only a modest temperature change because each molecule comprises an average of 21 atoms and therefore has many internal degrees of freedom. Even larger, more complex molecules can have dozens of internal degrees of freedom. Diffusion of thermal energy: entropy, phonons, and mobile conduction electrons Heat conduction is the diffusion of thermal energy from hot parts of a system to cold parts. A system can be either a single bulk entity or a plurality of discrete bulk entities. The term bulk in this context means a statistically significant quantity of particles (which can be a microscopic amount). Whenever thermal energy diffuses within an isolated system, temperature differences within the system decrease (and entropy increases). One particular heat conduction mechanism occurs when translational motion, the particle motion underlying temperature, transfers momentum from particle to particle in collisions. 
In gases, these translational motions are of the nature shown above in Fig. 1. As can be seen in that animation, not only does momentum (heat) diffuse throughout the volume of the gas through serial collisions, but entire molecules or atoms can move forward into new territory, bringing their kinetic energy with them. Consequently, temperature differences equalize throughout gases very quickly—especially for light atoms or molecules; convection speeds this process even more. Translational motion in solids, however, takes the form of phonons (see Fig. 4 at right). Phonons are constrained, quantized wave packets that travel at the speed of sound of a given substance. The manner in which phonons interact within a solid determines a variety of its properties, including its thermal conductivity. In electrically insulating solids, phonon-based heat conduction is usually inefficient and such solids are considered thermal insulators (such as glass, plastic, rubber, ceramic, and rock). This is because in solids, atoms and molecules are locked into place relative to their neighbors and are not free to roam. Metals, however, are not restricted to only phonon-based heat conduction. Thermal energy conducts through metals extraordinarily quickly because instead of direct molecule-to-molecule collisions, the vast majority of thermal energy is mediated via very light, mobile conduction electrons. This is why there is a near-perfect correlation between metals' thermal conductivity and their electrical conductivity. Conduction electrons imbue metals with their extraordinary conductivity because they are delocalized (i.e., not tied to a specific atom) and behave rather like a sort of quantum gas due to the effects of zero-point energy (for more on ZPE, see Note 1 below). Furthermore, electrons are relatively light, with a rest mass only about 1/1836 that of a proton. This is about the same ratio as a .22 Short bullet (29 grains or 1.88 g) compared to the rifle that shoots it. In accordance with Newton's third law of motion, the bullet and the rifle push on each other with equal and opposite forces; however, the bullet accelerates far more than the rifle. Since kinetic energy increases as the square of velocity, nearly all the kinetic energy goes into the bullet, not the rifle, even though both experience the same force from the expanding propellant gases. In the same manner, because conduction electrons are so much less massive, thermal energy is readily borne by them. Additionally, because they are delocalized and very fast, kinetic thermal energy conducts extremely quickly through metals with abundant conduction electrons. Diffusion of thermal energy: black-body radiation Thermal radiation is a byproduct of the collisions arising from various vibrational motions of atoms. These collisions cause the electrons of the atoms to emit thermal photons (known as black-body radiation). Photons are emitted anytime an electric charge is accelerated (as happens when electron clouds of two atoms collide). Even individual molecules with internal temperatures greater than absolute zero also emit black-body radiation from their atoms. In any bulk quantity of a substance at equilibrium, black-body photons are emitted across a range of wavelengths in a spectrum that has a bell curve-like shape called a Planck curve (see graph in Fig. 5 at right). The top of a Planck curve (the peak emittance wavelength) is located in a particular part of the electromagnetic spectrum depending on the temperature of the black-body.
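To put rough numbers on that dependence, the short sketch below combines Wien's displacement law (for the peak wavelength of the Planck curve) with the Stefan–Boltzmann T⁴ law (for total radiated power). The 296 K and 824 K temperatures are the ones used in the example that follows; the calculation is illustrative only.

    # Black-body radiation versus temperature: peak wavelength (Wien's
    # displacement law) and relative radiated power (Stefan-Boltzmann T^4 law).
    WIEN_B = 2.897771955e-3       # m*K, Wien displacement constant

    def peak_wavelength_m(T):
        return WIEN_B / T

    T_room, T_hot = 296.0, 824.0  # kelvins
    print(peak_wavelength_m(T_room) * 1e6)   # ~9.8 micrometres (thermal infrared)
    print(peak_wavelength_m(T_hot) * 1e6)    # ~3.5 micrometres
    print((T_hot / T_room) ** 4)             # ~60x more radiated power at 824 K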
Substances at extreme cryogenic temperatures emit at long radio wavelengths whereas extremely hot temperatures produce short gamma rays. Black-body radiation diffuses thermal energy throughout a substance as the photons are absorbed by neighboring atoms, transferring momentum in the process. Black-body photons also easily escape from a substance and can be absorbed by the ambient environment; kinetic energy is lost in the process. As established by the Stefan–Boltzmann law, the intensity of black-body radiation increases as the fourth power of absolute temperature. Thus, a black-body at 824 K (just short of glowing dull red) emits about 60 times as much radiant power as it does at 296 K (room temperature). This is why one can so easily feel the radiant heat from hot objects at a distance. At higher temperatures, such as those found in an incandescent lamp, black-body radiation can be the principal mechanism by which thermal energy escapes a system. Table of thermodynamic temperatures The table below shows various points on the thermodynamic scale, in order of increasing temperature. Heat of phase changes The kinetic energy of particle motion is just one contributor to the total thermal energy in a substance; another is the potential energy of the molecular bonds that can form in a substance as it cools (such as during condensing and freezing), energy that is bound up in phase transitions. The thermal energy required for a phase transition is called latent heat. This phenomenon may more easily be grasped by considering it in the reverse direction: latent heat is the energy required to break chemical bonds (such as during evaporation and melting). Almost everyone is familiar with the effects of phase transitions; for instance, steam at 100 °C can cause severe burns much faster than the 100 °C air from a hair dryer. This occurs because a large amount of latent heat is liberated as steam condenses into liquid water on the skin. Even though thermal energy is liberated or absorbed during phase transitions, pure chemical elements, compounds, and eutectic alloys exhibit no temperature change whatsoever while they undergo them (see Fig. 7, below right). Consider one particular type of phase transition: melting. When a solid is melting, crystal lattice chemical bonds are being broken apart; the substance is transitioning from what is known as a more ordered state to a less ordered state. In Fig. 7, the melting of ice is shown within the lower left box heading from blue to green. At one specific thermodynamic point, the melting point (which is 0 °C across a wide pressure range in the case of water), all the atoms or molecules are, on average, at the maximum energy threshold their chemical bonds can withstand without breaking away from the lattice. Chemical bonds are all-or-nothing forces: they either hold fast, or break; there is no in-between state. Consequently, when a substance is at its melting point, every joule of added thermal energy only breaks the bonds of a specific quantity of its atoms or molecules, converting them into a liquid of precisely the same temperature; no kinetic energy is added to translational motion (which is what gives substances their temperature). The effect is rather like popcorn: at a certain temperature, additional thermal energy cannot make the kernels any hotter until the transition (popping) is complete. If the process is reversed (as in the freezing of a liquid), thermal energy must be removed from a substance.
As stated above, the thermal energy required for a phase transition is called latent heat. In the specific cases of melting and freezing, it is called enthalpy of fusion or heat of fusion. If the molecular bonds in a crystal lattice are strong, the heat of fusion can be relatively great, typically in the range of 6 to 30 kJ per mole for water and most of the metallic elements. If the substance is one of the monatomic gases (which have little tendency to form molecular bonds), the heat of fusion is more modest, ranging from 0.021 to 2.3 kJ per mole. Relatively speaking, phase transitions can be truly energetic events. To completely melt ice at 0 °C into water at 0 °C, one must add roughly 80 times the thermal energy as is required to increase the temperature of the same mass of liquid water by one degree Celsius. The metals' ratios are even greater, typically in the range of 400 to 1200 times. The phase transition of boiling is much more energetic than freezing. For instance, the energy required to completely boil or vaporize water (what is known as enthalpy of vaporization) is roughly 540 times that required for a one-degree increase. Water's sizable enthalpy of vaporization is why one's skin can be burned so quickly as steam condenses on it (heading from red to green in Fig. 7 above); water vapor (gas phase) liquefies on the skin, releasing a large amount of energy (enthalpy) to the environment, including the skin, resulting in skin damage. In the opposite direction, this is why one's skin feels cool as liquid water on it evaporates (a process that occurs at a sub-ambient wet-bulb temperature that is dependent on relative humidity); the evaporation of water on the skin takes a large amount of energy from the environment, including the skin, reducing the skin's temperature. Water's highly energetic enthalpy of vaporization is also an important factor underlying why solar pool covers (floating, insulated blankets that cover swimming pools when the pools are not in use) are so effective at reducing heating costs: they prevent evaporation. (In other words, by limiting evaporation they limit the amount of energy the pool water can lose to it.) For instance, the evaporation of just 20 mm of water from a 1.29-meter-deep pool chills its water appreciably, as the sketch below illustrates. Internal energy The total energy of all translational and internal particle motions, including that of conduction electrons, plus the potential energy of phase changes, plus the zero-point energy of a substance, comprises its internal energy. Internal energy at absolute zero As a substance cools, different forms of internal energy and their related effects simultaneously decrease in magnitude: the latent heat of available phase transitions is liberated as a substance changes from a less ordered state to a more ordered state; the translational motions of atoms and molecules diminish (their kinetic energy or temperature decreases); the internal motions of molecules diminish (their internal energy or temperature decreases); conduction electrons (if the substance is an electrical conductor) travel somewhat slower; and black-body radiation's peak emittance wavelength increases (the photons' energy decreases). When particles of a substance are as close as possible to complete rest and retain only ZPE (zero-point energy)-induced quantum mechanical motion, the substance is at the temperature of absolute zero (T = 0).
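A minimal sketch of this latent-heat bookkeeping, using approximate handbook values for water (specific heat about 4.186 kJ/(kg·K), heat of fusion about 334 kJ/kg, heat of vaporization about 2,257 kJ/kg). It reproduces the "roughly 80 times" and "roughly 540 times" ratios quoted above and gives an estimate, not a precise figure, for the pool-cooling example.

    # Latent heat versus sensible heat for water (approximate handbook values).
    c_water  = 4.186     # kJ/(kg*K), specific heat of liquid water
    h_fusion = 334.0     # kJ/kg, enthalpy of fusion (melting ice)
    h_vapor  = 2257.0    # kJ/kg, enthalpy of vaporization (boiling water)

    print(h_fusion / c_water)   # ~80: melting ice vs. a 1 K temperature rise
    print(h_vapor / c_water)    # ~540: vaporization vs. a 1 K temperature rise

    # Evaporating 20 mm from a 1.29 m deep pool: the latent heat of the lost
    # layer is drawn from the remaining water, cooling the whole pool.
    evaporated_fraction = 0.020 / 1.29
    delta_T = evaporated_fraction * h_vapor / c_water
    print(delta_T)              # roughly 8-9 K of cooling, as an estimate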
Whereas absolute zero is the point of zero thermodynamic temperature and is also the point at which the particle constituents of matter have minimal motion, absolute zero is not necessarily the point at which a substance contains zero internal energy; one must be very precise with what one means by internal energy. Often, all the phase changes that can occur in a substance will have occurred by the time it reaches absolute zero. However, this is not always the case. Notably, T = 0 helium remains liquid at room pressure (Fig. 9 at right) and must be under a pressure of at least about 2.5 MPa (roughly 25 bar) to crystallize. This is because helium's heat of fusion (the energy required to melt helium ice) is so low (only 21 joules per mole) that the motion-inducing effect of zero-point energy is sufficient to prevent it from freezing at lower pressures. A further complication is that many solids change their crystal structure to more compact arrangements at extremely high pressures (up to millions of bars, or hundreds of gigapascals). These are known as solid–solid phase transitions wherein latent heat is liberated as a crystal lattice changes to a more thermodynamically favorable, compact one. The above complexities make for rather cumbersome blanket statements regarding the internal energy in T = 0 substances. Regardless of pressure though, what can be said is that at absolute zero, all solids with a lowest-energy crystal lattice, such as those with a closest-packed arrangement (see Fig. 8, above left), contain minimal internal energy, retaining only that due to the ever-present background of zero-point energy. One can also say that for a given substance at constant pressure, absolute zero is the point of lowest enthalpy (a measure of work potential that takes internal energy, pressure, and volume into consideration). Lastly, all T = 0 substances contain zero kinetic thermal energy. Practical applications for thermodynamic temperature Thermodynamic temperature is useful not only for scientists; it can also be useful for lay-people in many disciplines involving gases. By expressing variables in absolute terms and applying Gay-Lussac's law of temperature/pressure proportionality, solutions to everyday problems are straightforward; for instance, calculating how a temperature change affects the pressure inside an automobile tire. If the tire has a cold gauge pressure of 200 kPa, then its absolute pressure is about 300 kPa. Room temperature ("cold" in tire terms) is 296 K. If the tire temperature is 20 °C hotter (20 kelvins), the solution is calculated as 316 K / 296 K = 1.068, i.e. a 6.8% greater thermodynamic temperature and absolute pressure; that is, an absolute pressure of about 320 kPa, which is a gauge pressure of about 220 kPa. Relationship to ideal gas law The thermodynamic temperature is closely linked to the ideal gas law and its consequences. It can be linked also to the second law of thermodynamics. The thermodynamic temperature can be shown to have special properties, and in particular can be seen to be uniquely defined (up to some constant multiplicative factor) by considering the efficiency of idealized heat engines. Thus the ratio of two temperatures, T1 and T2, is the same in all absolute scales. Strictly speaking, the temperature of a system is well-defined only if it is at thermal equilibrium. From a microscopic viewpoint, a material is at thermal equilibrium if the quantities of heat exchanged between its individual particles cancel out. There are many possible scales of temperature, derived from a variety of observations of physical phenomena.
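A minimal sketch of that tire calculation, assuming an atmospheric pressure of about 101 kPa for the gauge-to-absolute conversion; the function name and numbers are illustrative only.

    # Gay-Lussac's law: at constant volume, absolute pressure scales with
    # thermodynamic temperature.
    P_ATM = 101.0                                   # kPa, assumed atmospheric pressure

    def hot_gauge_pressure(cold_gauge_kpa, cold_temp_k, hot_temp_k):
        cold_abs = cold_gauge_kpa + P_ATM           # gauge -> absolute
        hot_abs = cold_abs * hot_temp_k / cold_temp_k
        return hot_abs - P_ATM                      # absolute -> gauge

    print(hot_gauge_pressure(200.0, 296.0, 316.0))  # ~220 kPa, as in the text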
Loosely stated, temperature differences dictate the direction of heat between two systems such that their combined energy is maximally distributed among their lowest possible states. We call this distribution "entropy". To better understand the relationship between temperature and entropy, consider the relationship between heat, work and temperature illustrated in the Carnot heat engine. The engine converts heat into work by directing a temperature gradient between a higher temperature heat source, TH, and a lower temperature heat sink, TC, through a gas-filled piston. The work done per cycle is equal in magnitude to the net heat taken up, which is the sum of the heat qH taken up by the engine from the high-temperature source plus the waste heat given off by the engine, qC < 0. The efficiency of the engine is the work divided by the heat put into the system, or η = wcy/qH = (qH + qC)/qH = 1 + qC/qH (Equation 1), where wcy is the work done per cycle. Thus the efficiency depends only on qC/qH. Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures T1 and T2 must have the same efficiency, that is to say, the efficiency is a function of the temperatures only: |qC|/|qH| = f(TH, TC) (Equation 2). In addition, a reversible heat engine operating between a pair of thermal reservoirs at temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3. If this were not the case, then energy (in the form of q2) would be wasted or gained, resulting in different overall efficiencies every time a cycle is split into component cycles; clearly a cycle can be composed of any number of smaller cycles as an engine design choice, and any reversible engine between the same reservoirs at T1 and T3 must be equally efficient regardless of the engine design. If we choose the engines such that the work done by the one-cycle engine and the two-cycle engine is the same, then the efficiency of each heat engine is written as below: η1 = 1 − |q3|/|q1|, η2 = 1 − |q2|/|q1|, and η3 = 1 − |q3|/|q2|. Here, engine 1 is the one-cycle engine, and engines 2 and 3 make up the two-cycle engine with the intermediate reservoir at T2. We have also used the fact that the heat q2 passes through the intermediate thermal reservoir at T2 without losing its energy (i.e., q2 is not lost during its passage through the reservoir at T2). This fact can be proved as follows: in order to have consistency in the last equation, the heat q2 flowing from engine 2 into the intermediate reservoir must be equal to the heat q2 flowing out from the reservoir into engine 3. With this understanding of q1, q2 and q3, mathematically, f(T1, T3) = |q3|/|q1| = (|q2| |q3|)/(|q1| |q2|) = f(T1, T2) f(T2, T3). But since the first function is not a function of T2, the product of the final two functions must result in the removal of T2 as a variable. The only way is therefore to define the function f as f(T1, T2) = |q2|/|q1| = g(T2)/g(T1) and f(T2, T3) = |q3|/|q2| = g(T3)/g(T2), so that f(T1, T3) = |q3|/|q1| = g(T3)/g(T1). That is, the ratio of heat exchanged is a function of the respective temperatures at which it occurs. We can choose any monotonic function for our g(T); it is a matter of convenience and convention that we choose g(T) = T. Choosing then one fixed reference temperature (i.e. the triple point of water), we establish the thermodynamic temperature scale.
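As a small numerical illustration of the convention just adopted: with g(T) = T, the heat ratio of a reversible engine is simply the temperature ratio, and the two-cycle composition property can be checked directly. The reservoir temperatures below are arbitrary example values.

    # With g(T) = T, the heat ratio for a reversible engine equals the
    # temperature ratio, and the two-cycle composition property holds.
    T1, T2, T3 = 600.0, 450.0, 300.0   # arbitrary example reservoir temperatures, K

    def f(t_hot, t_cold):
        return t_cold / t_hot          # |q_cold| / |q_hot| for a reversible engine

    print(f(T1, T3))                   # 0.5
    print(f(T1, T2) * f(T2, T3))       # 0.5 as well: f(T1,T3) = f(T1,T2)*f(T2,T3)
    print(1 - f(T1, T3))               # Carnot efficiency between T1 and T3: 0.5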
Such a definition coincides with that of the ideal gas derivation; also it is this definition of the thermodynamic temperature that enables us to represent the Carnot efficiency in terms of TH and TC, and hence derive that the (complete) Carnot cycle is isentropic: |qC|/|qH| = f(TH, TC) = TC/TH, i.e. qC/qH = −TC/TH (Equation 3). Substituting this back into our first formula for efficiency yields a relationship in terms of temperature: η = 1 + qC/qH = 1 − TC/TH (Equation 4). Note that for TC = 0 the efficiency is 100% and that the efficiency becomes greater than 100% for TC < 0, which is unrealistic. Subtracting 1 from the right-hand side of Equation 4 and the middle portion gives qC/qH = −TC/TH and thus qH/TH + qC/TC = 0 (Equation 5). The generalization of this equation is the Clausius theorem, which proposes the existence of a state function S (i.e., a function which depends only on the state of the system, not on how it reached that state) defined (up to an additive constant) by S = ∫ δqrev/T, where the subscript rev indicates heat transfer in a reversible process. The function S is the entropy of the system, mentioned previously, and the change of S around any cycle is zero (as is necessary for any state function). Equation 5 can be rearranged to get an alternative definition for temperature in terms of entropy and heat (to avoid a logic loop, we should first define entropy through statistical mechanics): T = δqrev/dS. For a constant-volume system (so that no mechanical work is done) in which the entropy S(U) is a function of its internal energy U, we have dU = δqrev, and the thermodynamic temperature is therefore given by 1/T = dS/dU, so that the reciprocal of the thermodynamic temperature is the rate of change of entropy with respect to the internal energy at constant volume. History Guillaume Amontons (1663–1705) published two papers in 1702 and 1703 that may be used to credit him as being the first researcher to deduce the existence of a fundamental (thermodynamic) temperature scale featuring an absolute zero. He made the discovery while endeavoring to improve upon the air thermometers in use at the time. His J-tube thermometers comprised a mercury column that was supported by a fixed mass of air entrapped within the sensing portion of the thermometer. In thermodynamic terms, his thermometers relied upon the volume/temperature relationship of gas under constant pressure. His measurements of the boiling point of water and the melting point of ice showed that regardless of the mass of air trapped inside his thermometers or the weight of mercury the air was supporting, the reduction in air volume at the ice point was always the same ratio. This observation led him to posit that a sufficient reduction in temperature would reduce the air volume to zero. In fact, his calculations projected that absolute zero was equivalent to −240 °C, only 33.15 degrees short of the true value of −273.15 °C. Amontons' discovery of a one-to-one relationship between absolute temperature and absolute pressure was rediscovered a century later and popularized within the scientific community by Joseph Louis Gay-Lussac. Today, this principle of thermodynamics is commonly known as Gay-Lussac's law but is also known as Amontons' law. In 1742, Anders Celsius (1701–1744) created a "backwards" version of the modern Celsius temperature scale. In Celsius's original scale, zero represented the boiling point of water and 100 represented the melting point of ice. In his paper Observations of two persistent degrees on a thermometer, he recounted his experiments showing that ice's melting point was effectively unaffected by pressure. He also determined with remarkable precision how water's boiling point varied as a function of atmospheric pressure.
He proposed that zero on his temperature scale (water's boiling point) would be calibrated at the mean barometric pressure at mean sea level. Coincident with the death of Anders Celsius in 1744, the botanist Carl Linnaeus (1707–1778) effectively reversed Celsius's scale upon receipt of his first thermometer featuring a scale where zero represented the melting point of ice and 100 represented water's boiling point. The custom-made Linnaeus thermometer, for use in his greenhouses, was made by Daniel Ekström, Sweden's leading maker of scientific instruments at the time. For the next 204 years, the scientific and thermometry communities worldwide referred to this scale as the centigrade scale. Temperatures on the centigrade scale were often reported simply as degrees or, when greater specificity was desired, degrees centigrade. The symbol for temperature values on this scale was °C (in several formats over the years). Because the term centigrade was also the French-language name for a unit of angular measurement (one-hundredth of a right angle) and had a similar connotation in other languages, the term "centesimal degree" was used when very precise, unambiguous language was required by international standards bodies such as the International Bureau of Weights and Measures (BIPM). The 9th CGPM (General Conference on Weights and Measures) and the CIPM (International Committee for Weights and Measures) formally adopted degree Celsius (symbol: °C) in 1948. In his book Pyrometrie (1777), completed four months before his death, Johann Heinrich Lambert (1728–1777), sometimes incorrectly referred to as Joseph Lambert, proposed an absolute temperature scale based on the pressure/temperature relationship of a fixed volume of gas. This is distinct from the volume/temperature relationship of gas under constant pressure that Guillaume Amontons discovered 75 years earlier. Lambert stated that absolute zero was the point where a simple straight-line extrapolation reached zero gas pressure and was equal to −270 °C. Notwithstanding the work of Guillaume Amontons 85 years earlier, Jacques Alexandre César Charles (1746–1823) is often credited with discovering (circa 1787), but not publishing, that the volume of a gas under constant pressure is proportional to its absolute temperature. The formula he created was V1/T1 = V2/T2. Joseph Louis Gay-Lussac (1778–1850) published work in 1802 (acknowledging the unpublished lab notes of Jacques Charles fifteen years earlier) describing how the volume of gas under constant pressure changes linearly with its absolute (thermodynamic) temperature. This behavior is called Charles's law and is one of the gas laws. His are the first known formulas to use the number 273 for the expansion coefficient of gas relative to the melting point of ice (indicating that absolute zero was equivalent to −273 °C). William Thomson (1824–1907), also known as Lord Kelvin, wrote in his 1848 paper "On an Absolute Thermometric Scale" of the need for a scale whereby infinite cold (absolute zero) was the scale's zero point, and which used the degree Celsius for its unit increment. Like Gay-Lussac, Thomson calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the kelvin thermodynamic temperature scale. Thomson's value of −273 was derived from 0.00366, which was the accepted expansion coefficient of gas per degree Celsius relative to the ice point.
The inverse of −0.00366 expressed to five significant digits is −273.22 °C, which is remarkably close to the true value of −273.15 °C. In the paper he proposed to define temperature using idealized heat engines. In detail, he proposed that, given three heat reservoirs at temperatures T1 > T2 > T3, if two reversible heat engines (Carnot engines), one working between T1 and T2 and another between T2 and T3, can produce the same amount of mechanical work by letting the same amount of heat pass through, then the temperature intervals are defined to be equal: T1 − T2 = T2 − T3. Note that like Carnot, Kelvin worked under the assumption that heat is conserved ("the conversion of heat (or caloric) into mechanical effect is probably impossible"), and if heat goes into the heat engine, then heat must come out. Kelvin, realizing after Joule's experiments that heat is not a conserved quantity but is convertible with mechanical work, modified his scale in the 1851 work An Account of Carnot's Theory of the Motive Power of Heat. In this work, he defined the absolute temperature so that, for a reversible engine absorbing heat q1 at temperature T1 and rejecting heat q2 at T2, the ratio of the temperatures equals the ratio of the heats: T1/T2 = q1/q2. The above definition fixes the ratios between absolute temperatures, but it does not fix a scale for absolute temperature. For the scale, Thomson proposed to use the Celsius degree, that is, one one-hundredth of the interval between the freezing and the boiling points of water. In 1859 Macquorn Rankine (1820–1872) proposed a thermodynamic temperature scale similar to William Thomson's but which used the degree Fahrenheit for its unit increment, that is, one one-hundred-eightieth of the interval between the freezing and the boiling points of water. This absolute scale is known today as the Rankine thermodynamic temperature scale. Ludwig Boltzmann (1844–1906) made major contributions to thermodynamics between 1877 and 1884 through an understanding of the role that particle kinetics and black body radiation played. His name is now attached to several of the formulas used today in thermodynamics. Gas thermometry experiments carefully calibrated to the melting point of ice and boiling point of water showed in the 1930s that absolute zero was equivalent to −273.15 °C. Resolution 3 of the 9th General Conference on Weights and Measures (CGPM) in 1948 fixed the triple point of water at precisely 0.01 °C. At this time, the triple point still had no formal definition for its equivalent kelvin value, which the resolution declared "will be fixed at a later date". The implication is that if the value of absolute zero measured in the 1930s was truly −273.15 °C, then the triple point of water (0.01 °C) was equivalent to 273.16 K. Additionally, both the International Committee for Weights and Measures (CIPM) and the CGPM formally adopted the name Celsius for the degree Celsius and the Celsius temperature scale. Resolution 3 of the 10th CGPM in 1954 gave the kelvin scale its modern definition by choosing the triple point of water as its upper defining point (with no change to absolute zero being the null point) and assigning it a temperature of precisely 273.16 kelvins (actually written 273.16 degrees Kelvin at the time). This, in combination with Resolution 3 of the 9th CGPM, had the effect of defining absolute zero as being precisely zero kelvins and −273.15 °C. Resolution 3 of the 13th CGPM in 1967/1968 renamed the unit increment of thermodynamic temperature kelvin, symbol K, replacing degree absolute, symbol °K. Further, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM also decided in Resolution 4 that "The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water".
The CIPM affirmed in 2005 that for the purposes of delineating the temperature of the triple point of water, the definition of the kelvin thermodynamic temperature scale would refer to water having an isotopic composition defined as being precisely equal to the nominal specification of Vienna Standard Mean Ocean Water. In November 2018, the 26th General Conference on Weights and Measures (CGPM) changed the definition of the kelvin by fixing the Boltzmann constant to 1.380649×10⁻²³ when expressed in the unit J/K. This change (and other changes in the definition of SI units) was made effective on the 144th anniversary of the Metre Convention, 20 May 2019.
Time standard
A time standard is a specification for measuring time: either the rate at which time passes or points in time or both. In modern times, several time specifications have been officially recognized as standards, where formerly they were matters of custom and practice. An example of a kind of time standard can be a time scale, specifying a method for measuring divisions of time. A standard for civil time can specify both time intervals and time-of-day. Standardized time measurements are made using a clock to count periods of some periodic change, which may be either the changes of a natural phenomenon or of an artificial machine. Historically, time standards were often based on the Earth's rotational period. From the late 18th century to the 19th century it was assumed that the Earth's daily rotational rate was constant. Astronomical observations of several kinds, including eclipse records, studied in the 19th century, raised suspicions that the rate at which Earth rotates is gradually slowing and also shows small-scale irregularities, and this was confirmed in the early twentieth century. Time standards based on Earth rotation were replaced (or initially supplemented) for astronomical use from 1952 onwards by an ephemeris time standard based on the Earth's orbital period and in practice on the motion of the Moon. The invention in 1955 of the caesium atomic clock has led to the replacement of older and purely astronomical time standards, for most practical purposes, by newer time standards based wholly or partly on atomic time. Various types of second and day are used as the basic time interval for most time scales. Other intervals of time (minutes, hours, and years) are usually defined in terms of these two. Terminology The term "time" is generally used for many close but different concepts, including: instant as an object – one point on the time axis. Being an object, it has no value; date as a quantity characterising an instant. As a quantity, it has a value which may be expressed in a variety of ways, for example "2014-04-26T09:42:36,75" in ISO standard format, or more colloquially such as "today, 9:42 a.m."; time interval as an object – part of the time axis limited by two instants. Being an object, it has no value; duration as a quantity characterizing a time interval. As a quantity, it has a value, such as a number of minutes, or may be described in terms of the quantities (such as times and dates) of its beginning and end; chronology, an ordered sequence of events in the past. Chronologies can be put into chronological groups (periodization). One of the most important systems of periodization is the geologic time scale, which is a system of periodizing the events that shaped the Earth and its life. Chronology, periodization, and interpretation of the past are together known as the study of history. Definitions of the second There have only ever been three definitions of the second: as a fraction of the day, as a fraction of an extrapolated year, and as the microwave frequency of a caesium atomic clock. In early history, clocks were not accurate enough to track seconds. After the invention of mechanical clocks, the CGS system and MKS system of units both defined the second as 1/86,400 of a mean solar day. MKS was adopted internationally during the 1940s. In the late 1940s, quartz crystal oscillator clocks could measure time more accurately than the rotation of the Earth. Metrologists also knew that Earth's orbit around the Sun (a year) was much more stable than Earth's rotation.
This led to the definition of ephemeris time and the tropical year, and the ephemeris second was defined as "the fraction 1/31,556,925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time". This definition was adopted as part of the International System of Units in 1960. Most recently, atomic clocks have been developed that offer improved accuracy. Since 1967, the SI base unit for time is the SI second, defined as exactly "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom" (at a temperature of 0 K and at mean sea level). The SI second is the basis of all atomic timescales, e.g. coordinated universal time, GPS time, International Atomic Time, etc. Current time standards Geocentric Coordinate Time (TCG) is a coordinate time having its spatial origin at the center of Earth's mass. TCG is a theoretical ideal, and any particular realization will have measurement error. International Atomic Time (TAI) is the primary physically realized time standard. TAI is produced by the International Bureau of Weights and Measures (BIPM), and is based on the combined input of many atomic clocks around the world, each corrected for environmental and relativistic effects (both gravitational and due to speed, as in GNSS). TAI is not related to TCG directly but rather is a realization of Terrestrial Time (TT), a theoretical timescale that is a rescaling of TCG such that the time rate approximately matches proper time at mean sea level. Universal Time (UT1) is the Earth Rotation Angle (ERA) linearly scaled to match historical definitions of mean solar time at 0° longitude. At high precision, Earth's rotation is irregular and is determined from the positions of distant quasars using long baseline interferometry, laser ranging of the Moon and artificial satellites, as well as GPS satellite orbits. Coordinated Universal Time (UTC) is an atomic time scale designed to approximate UT1. UTC differs from TAI by an integral number of seconds. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the "leap second". To date these steps (and the difference "TAI-UTC") have always been positive. The Global Positioning System broadcasts a very precise time signal worldwide, along with instructions for converting GPS time (GPST) to UTC. It was defined with a constant offset from TAI: GPST = TAI − 19 s. The GPS time standard is maintained independently but regularly synchronized with, or from, UTC time. Standard time or civil time in a time zone deviates a fixed, round amount, usually a whole number of hours, from some form of Universal Time, usually UTC. The offset is chosen such that a new day starts approximately while the Sun is crossing the nadir meridian. Alternatively the difference is not really fixed, but it changes twice a year by a round amount, usually one hour; see Daylight saving time. Julian day number is a count of days elapsed since Greenwich mean noon on 1 January 4713 B.C., Julian proleptic calendar. The Julian Date is the Julian day number followed by the fraction of the day elapsed since the preceding noon. Conveniently for astronomers, this avoids the date skip during an observation night. Modified Julian day (MJD) is defined as MJD = JD − 2400000.5. An MJD day thus begins at midnight, civil date. Julian dates can be expressed in UT1, TAI, TT, etc. and so for precise applications the timescale should be specified, e.g. MJD 49135.3824 TAI.
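A minimal sketch of the arithmetic relations quoted above (MJD = JD − 2400000.5 and GPST = TAI − 19 s), together with the TAI − UTC leap-second offset. The leap-second count is date-dependent; the value 37 used below is an assumed example (the count in effect since 2017).

    # Time-scale arithmetic using the relations quoted above.
    def jd_to_mjd(jd):
        return jd - 2400000.5              # MJD = JD - 2400000.5

    def utc_to_tai(utc_seconds, leap_seconds):
        return utc_seconds + leap_seconds  # TAI = UTC + (TAI - UTC)

    def tai_to_gpst(tai_seconds):
        return tai_seconds - 19.0          # GPST = TAI - 19 s (constant offset)

    # Example with an assumed leap-second count of 37:
    tai = utc_to_tai(0.0, 37)
    print(tai, tai_to_gpst(tai))           # 37.0 18.0 -> GPS time runs 18 s ahead of UTC
    print(jd_to_mjd(2451545.0))            # 51544.5 (the J2000.0 epoch expressed as an MJD)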
Barycentric Coordinate Time (TCB) is a coordinate time having its spatial origin at the center of mass of the Solar System, which is called the barycenter. Conversions Conversions between atomic time systems (TAI, GPST, and UTC) are for the most part exact. However, GPS time is a measured value as opposed to a computed "paper" scale. As such it may differ from UTC(USNO) by a few hundred nanoseconds, which in turn may differ from official UTC by as much as 26 nanoseconds. Conversions for UT1 and TT rely on published difference tables which are specified to 10 microseconds and 0.1 nanoseconds respectively. Definitions:
LS = TAI − UTC = leap seconds, from the USNO Table of Leap Seconds
DUT1 = UT1 − UTC, published in IERS Bulletins or the U.S. Naval Observatory EO
DTT = TT − TAI − 32.184 s, published in BIPM's TT(BIPM) tables
TCG is linearly related to TT as: TCG − TT = LG × (JD − 2443144.5) × 86400 seconds, with the scale difference LG defined as 6.969290134×10⁻¹⁰ exactly. TCB is a linear transformation of TDB, and TDB differs from TT in small, mostly periodic terms. Neglecting these terms (on the order of 2 milliseconds for several millennia around the present epoch), TCB is related to TT by: TCB − TT = LB × (JD − 2443144.5) × 86400 seconds. The scale difference LB has been defined by the IAU to be 1.550519768×10⁻⁸ exactly. Time standards based on Earth rotation Apparent solar time or true solar time is based on the solar day, which is the period between one solar noon (passage of the real Sun across the meridian) and the next. A solar day is approximately 24 hours of mean time. Because the Earth's orbit around the Sun is elliptical, and because of the obliquity of the Earth's axis relative to the plane of the orbit (the ecliptic), the apparent solar day varies a few dozen seconds above or below the mean value of 24 hours. As the variation accumulates over a few weeks, there are differences as large as 16 minutes between apparent solar time and mean solar time (see Equation of time). However, these variations cancel out over a year. There are also other perturbations such as Earth's wobble, but these are less than a second per year. Sidereal time is time by the stars. A sidereal rotation is the time it takes the Earth to make one revolution with respect to the stars, approximately 23 hours 56 minutes 4 seconds. A mean solar day is about 3 minutes 56 seconds longer than a mean sidereal day. In astronomy, sidereal time is used to predict when a star will reach its highest point in the sky. For accurate astronomical work on land, it was usual to observe sidereal time rather than solar time to measure mean solar time, because the observations of 'fixed' stars could be measured and reduced more accurately than observations of the Sun (in spite of the need to make various small compensations, for refraction, aberration, precession, nutation and proper motion). It is well known that observations of the Sun pose substantial obstacles to the achievement of accuracy in measurement. In former times, before the distribution of accurate time signals, it was part of the routine work at any observatory to observe the sidereal times of meridian transit of selected 'clock stars' (of well-known position and movement), and to use these to correct observatory clocks running local mean sidereal time; but nowadays local sidereal time is usually generated by computer, based on time signals.
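Referring back to the conversion relations listed above, the following sketch evaluates TCG − TT and TCB − TT for a given Julian date, using the defining constants LG and LB and ignoring the small periodic TDB terms, as the text does.

    # Coordinate-time offsets relative to TT, per the linear relations above.
    L_G = 6.969290134e-10     # defining scale difference for TCG
    L_B = 1.550519768e-8      # defining scale difference for TCB
    JD_EPOCH = 2443144.5      # epoch appearing in both relations (1977 January 1)

    def tcg_minus_tt(jd):
        return L_G * (jd - JD_EPOCH) * 86400.0   # seconds

    def tcb_minus_tt(jd):
        return L_B * (jd - JD_EPOCH) * 86400.0   # seconds; periodic TDB terms ignored

    jd = 2460000.5            # an arbitrary recent Julian date
    print(tcg_minus_tt(jd))   # about 1.0 s accumulated since 1977
    print(tcb_minus_tt(jd))   # about 22.6 s accumulated since 1977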
Mean solar time was a time standard used especially at sea for navigational purposes, calculated by observing apparent solar time and then adding to it a correction, the equation of time, which compensated for two known irregularities in the length of the day, caused by the ellipticity of the Earth's orbit and the obliquity of the Earth's equator and polar axis to the ecliptic (which is the plane of the Earth's orbit around the sun). It has been superseded by Universal Time. Greenwich Mean Time was originally mean time deduced from meridian observations made at the Royal Greenwich Observatory (RGO). The principal meridian of that observatory was chosen in 1884 by the International Meridian Conference to be the Prime Meridian. GMT either by that name or as 'mean time at Greenwich' used to be an international time standard, but is no longer so; it was initially renamed in 1928 as Universal Time (UT) (partly as a result of ambiguities arising from the changed practice of starting the astronomical day at midnight instead of at noon, adopted as from 1 January 1925). UT1 is still in reality mean time at Greenwich. Today, GMT is a time zone but is still the legal time in the UK in winter (and as adjusted by one hour for summer time). But Coordinated Universal Time (UTC) (an atomic-based time scale which is always kept within 0.9 second of UT1) is in common actual use in the UK, and the name GMT is often used to refer to it. (See articles Greenwich Mean Time, Universal Time, Coordinated Universal Time and the sources they cite.) Versions of Universal Time such as UT0 and UT2 have been defined but are no longer in use. Time standards for planetary motion calculations Ephemeris time (ET) and its successor time scales described below have all been intended for astronomical use, e.g. in planetary motion calculations, with aims including uniformity, in particular, freedom from irregularities of Earth rotation. Some of these standards are examples of dynamical time scales and/or of coordinate time scales. Ephemeris Time was from 1952 to 1976 an official time scale standard of the International Astronomical Union; it was a dynamical time scale based on the orbital motion of the Earth around the Sun, from which the ephemeris second was derived as a defined fraction of the tropical year. This ephemeris second was the standard for the SI second from 1956 to 1967, and it was also the source for calibration of the caesium atomic clock; its length has been closely duplicated, to within 1 part in 10¹⁰, in the size of the current SI second referred to atomic time. This Ephemeris Time standard was non-relativistic and did not fulfil growing needs for relativistic coordinate time scales. It was in use for the official almanacs and planetary ephemerides from 1960 to 1983, and was replaced in official almanacs for 1984 and after, by numerically integrated Jet Propulsion Laboratory Development Ephemeris DE200 (based on the JPL relativistic coordinate time scale Teph). For applications at the Earth's surface, ET's official replacement was Terrestrial Dynamical Time (TDT), which maintained continuity with it. TDT is a uniform atomic time scale, whose unit is the SI second. TDT is tied in its rate to the SI second, as is International Atomic Time (TAI), but because TAI was somewhat arbitrarily defined at its inception in 1958 to be initially equal to a refined version of UT, TDT was offset from TAI, by a constant 32.184 seconds. The offset provided a continuity from Ephemeris Time to TDT.
TDT has since been redefined as Terrestrial Time (TT). For the calculation of ephemerides, Barycentric Dynamical Time (TDB) was officially recommended to replace ET. TDB is similar to TDT but includes relativistic corrections that move the origin to the barycenter, hence it is a dynamical time at the barycenter. TDB differs from TT only in periodic terms. The difference is at most 2 milliseconds. Deficiencies were found in the definition of TDB (though not affecting Teph), and TDB has been replaced by Barycentric Coordinate Time (TCB) and Geocentric Coordinate Time (TCG), and redefined to be JPL ephemeris time argument Teph, a specific fixed linear transformation of TCB. As defined, TCB (as observed from the Earth's surface) is of divergent rate relative to all of ET, Teph and TDT/TT; and the same is true, to a lesser extent, of TCG. The ephemerides of Sun, Moon and planets in current widespread and official use continue to be those calculated at the Jet Propulsion Laboratory (updated as from 2003 to DE405) using as argument Teph.
Transmission line
In electrical engineering, a transmission line is a specialized cable or other structure designed to conduct electromagnetic waves in a contained manner. The term applies when the conductors are long enough that the wave nature of the transmission must be taken into account. This applies especially to radio-frequency engineering because the short wavelengths mean that wave phenomena arise over very short distances (this can be as short as millimetres depending on frequency). However, the theory of transmission lines was historically developed to explain phenomena on very long telegraph lines, especially submarine telegraph cables. Transmission lines are used for purposes such as connecting radio transmitters and receivers with their antennas (they are then called feed lines or feeders), distributing cable television signals, trunklines routing calls between telephone switching centres, computer network connections and high speed computer data buses. RF engineers commonly use short pieces of transmission line, usually in the form of printed planar transmission lines, arranged in certain patterns to build circuits such as filters. These circuits, known as distributed-element circuits, are an alternative to traditional circuits using discrete capacitors and inductors. Overview Ordinary electrical cables suffice to carry low frequency alternating current (AC), such as mains power, which reverses direction 100 to 120 times per second, and audio signals. However, they are not generally used to carry currents in the radio frequency range, above about 30 kHz, because the energy tends to radiate off the cable as radio waves, causing power losses. Radio frequency currents also tend to reflect from discontinuities in the cable such as connectors and joints, and travel back down the cable toward the source. These reflections act as bottlenecks, preventing the signal power from reaching the destination. Transmission lines use specialized construction, and impedance matching, to carry electromagnetic signals with minimal reflections and power losses. The distinguishing feature of most transmission lines is that they have uniform cross sectional dimensions along their length, giving them a uniform impedance, called the characteristic impedance, to prevent reflections. Types of transmission line include parallel line (ladder line, twisted pair), coaxial cable, and planar transmission lines such as stripline and microstrip. The higher the frequency of electromagnetic waves moving through a given cable or medium, the shorter the wavelength of the waves. Transmission lines become necessary when the transmitted frequency's wavelength is sufficiently short that the length of the cable becomes a significant part of a wavelength. At frequencies of microwave and higher, power losses in transmission lines become excessive, and waveguides are used instead, which function as "pipes" to confine and guide the electromagnetic waves. Some sources define waveguides as a type of transmission line; however, this article will not include them. History Mathematical analysis of the behaviour of electrical transmission lines grew out of the work of James Clerk Maxwell, Lord Kelvin, and Oliver Heaviside. In 1855, Lord Kelvin formulated a diffusion model of the current in a submarine cable. The model correctly predicted the poor performance of the 1858 trans-Atlantic submarine telegraph cable. 
In 1885, Heaviside published the first papers that described his analysis of propagation in cables and the modern form of the telegrapher's equations. The four terminal model For the purposes of analysis, an electrical transmission line can be modelled as a two-port network (also called a quadripole), as follows: In the simplest case, the network is assumed to be linear (i.e. the complex voltage across either port is proportional to the complex current flowing into it when there are no reflections), and the two ports are assumed to be interchangeable. If the transmission line is uniform along its length, then its behaviour is largely described by two parameters: the characteristic impedance, symbol Z0, and the propagation delay. Z0 is the ratio of the complex voltage of a given wave to the complex current of the same wave at any point on the line. Typical values of Z0 are 50 or 75 ohms for a coaxial cable, about 100 ohms for a twisted pair of wires, and about 300 ohms for a common type of untwisted pair used in radio transmission. Propagation delay is proportional to the length of the transmission line and is never less than the length divided by the speed of light. Typical delays for modern communication transmission lines are a few nanoseconds per metre. When sending power down a transmission line, it is usually desirable that as much power as possible will be absorbed by the load and as little as possible will be reflected back to the source. This can be ensured by making the load impedance equal to Z0, in which case the transmission line is said to be matched. Some of the power that is fed into a transmission line is lost because of its resistance. This effect is called ohmic or resistive loss (see ohmic heating). At high frequencies, another effect called dielectric loss becomes significant, adding to the losses caused by resistance. Dielectric loss is caused when the insulating material inside the transmission line absorbs energy from the alternating electric field and converts it to heat (see dielectric heating). The transmission line is modelled with a resistance (R) and inductance (L) in series with a capacitance (C) and conductance (G) in parallel. The resistance and conductance contribute to the loss in a transmission line. The total loss of power in a transmission line is often specified in decibels per metre (dB/m), and usually depends on the frequency of the signal. The manufacturer often supplies a chart showing the loss in dB/m at a range of frequencies. A loss of 3 dB corresponds approximately to a halving of the power. Propagation delay is often specified in units of nanoseconds per metre. While propagation delay usually depends on the frequency of the signal, transmission lines are typically operated over frequency ranges where the propagation delay is approximately constant. Telegrapher's equations The telegrapher's equations (or just telegraph equations) are a pair of linear differential equations which describe the voltage (V) and current (I) on an electrical transmission line as functions of distance and time. They were developed by Oliver Heaviside who created the transmission line model, and are based on Maxwell's equations. The transmission line model is an example of the distributed-element model. It represents the transmission line as an infinite series of two-port elementary components, each representing an infinitesimally short segment of the transmission line: The distributed resistance of the conductors is represented by a series resistor (expressed in ohms per unit length).
The distributed inductance (due to the magnetic field around the wires, self-inductance, etc.) is represented by a series inductor (in henries per unit length). The capacitance between the two conductors is represented by a shunt capacitor (in farads per unit length). The conductance of the dielectric material separating the two conductors is represented by a shunt resistor between the signal wire and the return wire (in siemens per unit length). The model consists of an infinite series of the elements shown in the figure, and the values of the components are specified per unit length so the picture of the component can be misleading. R, L, C, and G may also be functions of frequency. An alternative notation is to use R′, L′, C′ and G′ to emphasize that the values are derivatives with respect to length. These quantities can also be known as the primary line constants to distinguish from the secondary line constants derived from them, these being the propagation constant, attenuation constant and phase constant. The line voltage V(x) and the current I(x) can be expressed in the frequency domain (with angular frequency ω and imaginary unit j) as dV(x)/dx = −(R + jωL)·I(x) and dI(x)/dx = −(G + jωC)·V(x). Special case of a lossless line When the elements R and G are negligibly small the transmission line is considered as a lossless structure. In this hypothetical case, the model depends only on the L and C elements, which greatly simplifies the analysis. For a lossless transmission line, the second order steady-state Telegrapher's equations are d²V(x)/dx² + ω²LC·V(x) = 0 and d²I(x)/dx² + ω²LC·I(x) = 0. These are wave equations which have plane waves with equal propagation speed in the forward and reverse directions as solutions. The physical significance of this is that electromagnetic waves propagate down transmission lines and in general, there is a reflected component that interferes with the original signal. These equations are fundamental to transmission line theory. General case of a line with losses In the general case the loss terms, R and G, are both included, and the full form of the Telegrapher's equations becomes d²V(x)/dx² = γ²·V(x) and d²I(x)/dx² = γ²·I(x), where γ is the (complex) propagation constant. These equations are fundamental to transmission line theory. They are also wave equations, and have solutions similar to the special case, but which are a mixture of sines and cosines with exponential decay factors. Solving for the propagation constant in terms of the primary parameters R, L, G, and C gives γ = sqrt((R + jωL)(G + jωC)), and the characteristic impedance can be expressed as Z0 = sqrt((R + jωL)/(G + jωC)). The solutions for V(x) and I(x) are V(x) = V+·e^(−γx) + V−·e^(γx) and I(x) = (V+/Z0)·e^(−γx) − (V−/Z0)·e^(γx). The constants V+ and V− must be determined from boundary conditions. For a voltage pulse Vin(t), starting at x = 0 and moving in the positive x direction, the transmitted pulse Vout(x, t) at position x can be obtained by computing the Fourier transform, Ṽ(ω), of Vin(t), attenuating each frequency component by e^(−Re(γ)·x), advancing its phase by −Im(γ)·x, and taking the inverse Fourier transform. The real and imaginary parts of γ are the attenuation constant α and the phase constant β. Writing (R + jωL)(G + jωC) = a + jb, with a = RG − ω²LC and b = ω(LG + RC), they can be computed as α = (a² + b²)^(1/4)·cos(atan2(b, a)/2) and β = (a² + b²)^(1/4)·sin(atan2(b, a)/2), where atan2 is the everywhere-defined form of the two-parameter arctangent function, with arbitrary value zero when both arguments are zero. Alternatively, the complex square root can be evaluated algebraically, to yield α = sqrt((sqrt(a² + b²) + a)/2) and β = ±sqrt((sqrt(a² + b²) − a)/2), with the signs chosen so that the wave decays in its direction of motion through the conducting medium. (a is usually negative, since R and G are typically much smaller than ωL and ωC, respectively; α is always non-negative, and for the forward-travelling wave β is also non-negative, because 2αβ = b and b ≥ 0.)
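A minimal numerical sketch of the secondary line constants just derived, for an assumed set of primary constants (the values are illustrative, loosely in the range of a small 50-ohm coaxial cable).

    # Secondary line constants from assumed primary constants.
    import cmath, math

    R = 0.5        # ohms per metre
    L = 250e-9     # henries per metre
    G = 1e-6       # siemens per metre
    C = 100e-12    # farads per metre
    f = 100e6      # hertz
    w = 2 * math.pi * f

    series = R + 1j * w * L
    shunt = G + 1j * w * C
    gamma = cmath.sqrt(series * shunt)   # propagation constant, 1/m (alpha + j*beta)
    Z0 = cmath.sqrt(series / shunt)      # characteristic impedance, ohms

    print(Z0)                            # ~50 ohms for these values
    print(gamma.real)                    # attenuation constant alpha, nepers per metre
    print(gamma.imag, 2 * math.pi / gamma.imag)   # beta and the wavelength on the line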
Special, low loss case For small losses and high frequencies, the general equations can be simplified: if R ≪ ωL and G ≪ ωC, then γ ≈ jω·sqrt(LC) + (R/2)·sqrt(C/L) + (G/2)·sqrt(L/C); that is, β ≈ ω·sqrt(LC) and α ≈ (R/2)·sqrt(C/L) + (G/2)·sqrt(L/C). Since an advance in phase by −ωδ is equivalent to a time delay by δ, Vout(x, t) can be simply computed as Vout(x, t) ≈ Vin(t − x·sqrt(LC))·e^(−αx). Heaviside condition The Heaviside condition is G/C = R/L. If R, G, L, and C are constants that are not frequency dependent and the Heaviside condition is met, then waves travel down the transmission line without dispersion distortion. Input impedance of transmission line The characteristic impedance of a transmission line is the ratio of the amplitude of a single voltage wave to its current wave. Since most transmission lines also have a reflected wave, the characteristic impedance is generally not the impedance that is measured on the line. The impedance measured at a given distance l from the load impedance ZL may be expressed as Zin(l) = Z0·(1 + ΓL·e^(−2γl))/(1 − ΓL·e^(−2γl)), where γ is the propagation constant and ΓL = (ZL − Z0)/(ZL + Z0) is the voltage reflection coefficient measured at the load end of the transmission line. Alternatively, the above formula can be rearranged to express the input impedance in terms of the load impedance rather than the load voltage reflection coefficient: Zin(l) = Z0·(ZL + Z0·tanh(γl))/(Z0 + ZL·tanh(γl)). Input impedance of lossless transmission line For a lossless transmission line, the propagation constant is purely imaginary, γ = jβ, so the above formulas can be rewritten as Zin(l) = Z0·(ZL + j·Z0·tan(βl))/(Z0 + j·ZL·tan(βl)), where β = 2π/λ is the wavenumber. In calculating β, the wavelength is generally different inside the transmission line to what it would be in free-space. Consequently, the velocity factor of the material the transmission line is made of needs to be taken into account when doing such a calculation. Special cases of lossless transmission lines Half wave length For the special case where βl = nπ, where n is an integer (meaning that the length of the line is a multiple of half a wavelength), the expression reduces to the load impedance, so that Zin = ZL for all n. This includes the case when n = 0, meaning that the length of the transmission line is negligibly small compared to the wavelength. The physical significance of this is that the transmission line can be ignored (i.e. treated as a wire) in either case. Quarter wave length For the case where the length of the line is one quarter wavelength long, or an odd multiple of a quarter wavelength long, the input impedance becomes Zin = Z0²/ZL. Matched load Another special case is when the load impedance is equal to the characteristic impedance of the line (i.e. the line is matched), in which case the impedance reduces to the characteristic impedance of the line so that Zin = Z0 for all l and all λ. Short For the case of a shorted load (i.e. ZL = 0), the input impedance is purely imaginary and a periodic function of position and wavelength (frequency): Zin(l) = j·Z0·tan(βl). Open For the case of an open load (i.e. ZL = ∞), the input impedance is once again imaginary and periodic: Zin(l) = −j·Z0·cot(βl). Matrix parameters The simulation of transmission lines embedded into larger systems generally utilizes admittance parameters (Y matrix), impedance parameters (Z matrix), and/or scattering parameters (S matrix) that embody the full transmission line model needed to support the simulation. Admittance parameters Admittance (Y) parameters may be defined by applying a fixed voltage to one port (V1) of a transmission line with the other end shorted to ground and measuring the resulting current running into each port (I1, I2) and computing the admittance on each port as a ratio of I/V. The admittance parameter Y11 is I1/V1, and the admittance parameter Y12 is I2/V1. Since transmission lines are electrically passive and symmetric devices, Y12 = Y21, and Y11 = Y22.
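A small check of the lossless input-impedance formula above, for assumed example values of Z0 and ZL, confirming the half-wave and quarter-wave special cases.

    # Input impedance of a lossless line terminated in ZL, at electrical length beta*l.
    import math

    def input_impedance(Z0, ZL, beta_l):
        t = math.tan(beta_l)
        return Z0 * (ZL + 1j * Z0 * t) / (Z0 + 1j * ZL * t)

    Z0, ZL = 50.0, 100.0                          # assumed example values, ohms
    print(input_impedance(Z0, ZL, math.pi))       # ~100 ohms: a half-wave line repeats ZL
    print(input_impedance(Z0, ZL, math.pi / 2))   # ~25 ohms: quarter-wave value, Z0**2/ZL
    # (tan(pi/2) is a very large finite float, so the quarter-wave result is a
    # numerical approach to the limit Z0**2/ZL rather than an exact evaluation.)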
For lossless and lossy transmission lines respectively, the Y parameter matrix is as follows: Impedance parameters Impedance (Z) parameters may be defined by applying a fixed current into one port (I1) of a transmission line with the other port open and measuring the resulting voltage on each port (V1, V2): the impedance parameter Z11 is V1/I1, and the impedance parameter Z12 is V2/I1. Since transmission lines are electrically passive and symmetric devices, Z12 = Z21, and Z11 = Z22. In the Y and Z matrix definitions, the Y matrix is the inverse of the Z matrix and vice versa. Unlike ideal lumped 2 port elements (resistors, capacitors, inductors, etc.) which do not have defined Z parameters, transmission lines have an internal path to ground, which permits the definition of Z parameters. For lossless and lossy transmission lines respectively, the Z parameter matrix is as follows: Scattering parameters Scattering (S) matrix parameters model the electrical behavior of the transmission line with matched loads at each termination. For lossless and lossy transmission lines respectively, the S parameter matrix is as follows, using standard hyperbolic to circular complex translations. Variable definitions In all matrix parameters above, the following variable definitions apply: Z0 = characteristic impedance, Zp = port impedance or termination impedance, γ = the propagation constant per unit length, α = attenuation constant in nepers per unit length, β = wave number or phase constant in radians per unit length, ω = frequency in radians per second, vp = speed of propagation, λ = wavelength in units of length, L = inductance per unit length, C = capacitance per unit length, εeff = effective dielectric constant, and c = 299,792,458 meters per second = speed of light in a vacuum. Coupled transmission lines Transmission lines may be placed in proximity to each other such that they electrically interact, such as two microstrip lines in close proximity. Such transmission lines are said to be coupled transmission lines. Coupled transmission lines are characterized by an even and odd mode analysis. The even mode is characterized by excitation of the two conductors with a signal of equal amplitude and phase. The odd mode is characterized by excitation with signals of equal and opposite magnitude. The even and odd modes each have their own characteristic impedances (Zoe, Zoo) and phase constants (βe, βo). Lossy coupled transmission lines have their own even and odd mode attenuation constants (αe, αo), which in turn lead to even and odd mode propagation constants (γe, γo). Coupled matrix parameters Coupled transmission lines may be modeled using the even and odd mode transmission line parameters defined in the prior paragraph, with ports 1 and 2 on the input and ports 3 and 4 on the output. Practical types Coaxial cable Coaxial lines confine virtually all of the electromagnetic wave to the area inside the cable. Coaxial lines can therefore be bent and twisted (subject to limits) without negative effects, and they can be strapped to conductive supports without inducing unwanted currents in them. In radio-frequency applications up to a few gigahertz, the wave propagates in the transverse electric and magnetic mode (TEM) only, which means that the electric and magnetic fields are both perpendicular to the direction of propagation (the electric field is radial, and the magnetic field is circumferential). However, at frequencies for which the wavelength (in the dielectric) is significantly shorter than the circumference of the cable, other transverse modes can propagate.
These modes are classified into two groups, transverse electric (TE) and transverse magnetic (TM) waveguide modes. When more than one mode can exist, bends and other irregularities in the cable geometry can cause power to be transferred from one mode to another. The most common use for coaxial cables is for television and other signals with bandwidth of multiple megahertz. In the middle 20th century they carried long distance telephone connections. Planar lines Planar transmission lines are transmission lines with conductors, or in some cases dielectric strips, that are flat, ribbon-shaped lines. They are used to interconnect components on printed circuits and integrated circuits working at microwave frequencies because the planar type fits in well with the manufacturing methods for these components. Several forms of planar transmission lines exist. Microstrip A microstrip circuit uses a thin flat conductor which is parallel to a ground plane. Microstrip can be made by having a strip of copper on one side of a printed circuit board (PCB) or ceramic substrate while the other side is a continuous ground plane. The width of the strip, the thickness of the insulating layer (PCB or ceramic) and the dielectric constant of the insulating layer determine the characteristic impedance. Microstrip is an open structure whereas coaxial cable is a closed structure. Stripline A stripline circuit uses a flat strip of metal which is sandwiched between two parallel ground planes. The insulating material of the substrate forms a dielectric. The width of the strip, the thickness of the substrate and the relative permittivity of the substrate determine the characteristic impedance of the strip which is a transmission line. Coplanar waveguide A coplanar waveguide consists of a center strip and two adjacent outer conductors, all three of them flat structures that are deposited onto the same insulating substrate and thus are located in the same plane ("coplanar"). The width of the center conductor, the distance between inner and outer conductors, and the relative permittivity of the substrate determine the characteristic impedance of the coplanar transmission line. Balanced lines A balanced line is a transmission line consisting of two conductors of the same type, and equal impedance to ground and other circuits. There are many formats of balanced lines, amongst the most common are twisted pair, star quad and twin-lead. Twisted pair Twisted pairs are commonly used for terrestrial telephone communications. In such cables, many pairs are grouped together in a single cable, from two to several thousand. The format is also used for data network distribution inside buildings, but the cable is more expensive because the transmission line parameters are tightly controlled. Star quad Star quad is a four-conductor cable in which all four conductors are twisted together around the cable axis. It is sometimes used for two circuits, such as 4-wire telephony and other telecommunications applications. In this configuration each pair uses two non-adjacent conductors. Other times it is used for a single, balanced line, such as audio applications and 2-wire telephony. In this configuration two non-adjacent conductors are terminated together at both ends of the cable, and the other two conductors are also terminated together. When used for two circuits, crosstalk is reduced relative to cables with two separate twisted pairs. 
When used for a single, balanced line, magnetic interference picked up by the cable arrives as a virtually perfect common mode signal, which is easily removed by coupling transformers. The combined benefits of twisting, balanced signalling, and quadrupole pattern give outstanding noise immunity, especially advantageous for low signal level applications such as microphone cables, even when installed very close to a power cable. The disadvantage is that star quad, in combining two conductors, typically has double the capacitance of similar two-conductor twisted and shielded audio cable. High capacitance causes increasing distortion and greater loss of high frequencies as distance increases. Twin-lead Twin-lead consists of a pair of conductors held apart by a continuous insulator. By holding the conductors a known distance apart, the geometry is fixed and the line characteristics are reliably consistent. It is lower loss than coaxial cable because the characteristic impedance of twin-lead is generally higher than coaxial cable, leading to lower resistive losses due to the reduced current. However, it is more susceptible to interference. Lecher lines Lecher lines are a form of parallel conductor that can be used at UHF for creating resonant circuits. They are a convenient practical format that fills the gap between lumped components (used at HF/VHF) and resonant cavities (used at UHF/SHF). Single-wire line Unbalanced lines were formerly much used for telegraph transmission, but this form of communication has now fallen into disuse. Cables are similar to twisted pair in that many cores are bundled into the same cable but only one conductor is provided per circuit and there is no twisting. All the circuits on the same route use a common path for the return current (earth return). There is a power transmission version of single-wire earth return in use in many locations. General applications Signal transfer Electrical transmission lines are very widely used to transmit high frequency signals over long or short distances with minimum power loss. One familiar example is the down lead from a TV or radio aerial to the receiver. Transmission line circuits A large variety of circuits can also be constructed with transmission lines including impedance matching circuits, filters, power dividers and directional couplers. Stepped transmission line A stepped transmission line is used for broad range impedance matching. It can be considered as multiple transmission line segments connected in series, with the characteristic impedance of each individual element to be . The input impedance can be obtained from the successive application of the chain relation where is the wave number of the -th transmission line segment and is the length of this segment, and is the front-end impedance that loads the -th segment. Because the characteristic impedance of each transmission line segment is often different from the impedance of the fourth, input cable (only shown as an arrow marked on the left side of the diagram above), the impedance transformation circle is off-centred along the axis of the Smith Chart whose impedance representation is usually normalized against . Approximating lumped elements At higher frequencies, the reactive parasitic effects of real world lumped elements, including inductors and capacitors, limits their usefulness. 
Therefore, it is sometimes useful to approximate the electrical characteristics of inductors and capacitors with transmission lines at the higher frequencies using Richards' Transformations and then substitute the transmission lines for the lumped elements. More accurate forms of multimode high frequency inductor modeling with transmission lines exist for advanced designers. Stub filters If a short-circuited or open-circuited transmission line is wired in parallel with a line used to transfer signals from point A to point B, then it will function as a filter. The method for making stubs is similar to the method for using Lecher lines for crude frequency measurement, but it is 'working backwards'. One method recommended in the RSGB's radiocommunication handbook is to take an open-circuited length of transmission line wired in parallel with the feeder delivering signals from an aerial. By cutting the free end of the transmission line, a minimum in the strength of the signal observed at a receiver can be found. At this stage the stub filter will reject this frequency and the odd harmonics, but if the free end of the stub is shorted then the stub will become a filter rejecting the even harmonics. Wideband filters can be achieved using multiple stubs. However, this is a somewhat dated technique. Much more compact filters can be made with other methods such as parallel-line resonators. Pulse generation Transmission lines are used as pulse generators. By charging the transmission line and then discharging it into a resistive load, a rectangular pulse equal in length to twice the electrical length of the line can be obtained, although with half the voltage. A Blumlein transmission line is a related pulse forming device that overcomes this limitation. These are sometimes used as the pulsed power sources for radar transmitters and other devices. Sound The theory of sound wave propagation is very similar mathematically to that of electromagnetic waves, so techniques from transmission line theory are also used to build structures to conduct acoustic waves; and these are called acoustic transmission lines.
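The stub behaviour described above follows directly from the lossless input-impedance relation Zin = Z0·(ZL + jZ0·tan βl)/(Z0 + jZL·tan βl): an open-circuited stub looks like a short across the feeder at the frequency where it is a quarter wavelength long (and at the odd harmonics of that frequency), while shorting its far end moves the notches to the even harmonics. The sketch below assumes a stub length, characteristic impedance and cable velocity factor purely for illustration.

```python
import numpy as np

C0 = 299_792_458.0   # speed of light in vacuum, m/s

def input_impedance(z0, zl, beta, length):
    """Input impedance of a lossless line of the given length terminated in zl."""
    t = np.tan(beta * length)
    if np.isinf(zl):                      # open-circuited far end
        return -1j * z0 / t
    return z0 * (zl + 1j * z0 * t) / (z0 + 1j * zl * t)

z0     = 50.0        # characteristic impedance of the stub, ohm (assumed)
vf     = 0.66        # velocity factor of the cable (assumed)
length = 0.5         # stub length, metres (assumed)

f0 = vf * C0 / (4 * length)               # stub is a quarter wavelength at f0
for k in (1, 2, 3):
    f = k * f0
    beta = 2 * np.pi * f / (vf * C0)      # wavenumber on the cable at frequency f
    z_open  = input_impedance(z0, np.inf, beta, length)
    z_short = input_impedance(z0, 0.0,    beta, length)
    print(f"{k}*f0 = {f/1e6:7.1f} MHz   open stub: {abs(z_open):10.2e} ohm"
          f"   shorted stub: {abs(z_short):10.2e} ohm")
# At f0 and 3*f0 the open stub is (numerically) almost a short across the
# feeder, producing the notch; the shorted stub instead notches 2*f0.
```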
Technology
Signal transmission
null
41822
https://en.wikipedia.org/wiki/Troposphere
Troposphere
The troposphere is the lowest layer of the atmosphere of Earth. It contains 80% of the total mass of the planetary atmosphere and 99% of the total mass of water vapor and aerosols, and is where most weather phenomena occur. From the planetary surface of the Earth, the average height of the troposphere is in the tropics; in the middle latitudes; and in the high latitudes of the polar regions in winter; thus the average height of the troposphere is . The term troposphere derives from the Greek words tropos (rotating) and sphaira (sphere), indicating that rotational turbulence mixes the layers of air and so determines the structure and the phenomena of the troposphere. The rotational friction of the troposphere against the planetary surface affects the flow of the air, and so forms the planetary boundary layer (PBL), which varies in height from hundreds of meters up to . The measures of the PBL vary according to the latitude, the landform, and the time of day when the meteorological measurement is realized. Atop the troposphere is the tropopause, which is the functional atmospheric border that demarcates the troposphere from the stratosphere. The tropopause is an inversion layer: within it the air temperature stops falling with height and remains roughly constant before beginning to increase with altitude in the stratosphere. Because it contains most of the atmosphere's mass, the troposphere also holds the largest share of the atmosphere's nitrogen. Structure Composition The Earth's planetary atmosphere contains, besides other gases, water vapour and carbon dioxide, which produce carbonic acid in rain water, which therefore has an approximate natural pH of 5.0 to 5.5 (slightly acidic). (Water other than atmospheric water vapour fallen as fresh rain, such as fresh/sweet/potable/river water, will usually be affected by the physical environment and may not be in this pH range.) By volume (not by mass), dry air consists of 78.08% nitrogen as N2, 20.95% oxygen as O2, 0.93% argon and trace gases, together with variable amounts of condensing water (from saturated water vapor). Carbon dioxide released into the atmosphere from a pressurised source combines with atmospheric water vapour to form carbonic acid and reduces the pH of that water by a negligible amount. Respiration from animals likewise releases carbon dioxide, along with low levels of other ions. Combustion of hydrocarbons releases water and carbon dioxide to the atmosphere as saturates, condensates, vapour or gas (invisible steam), and the dissolved carbon dioxide again forms carbonic acid. Combustion can also release particulates (carbon/soot and ash) as well as molecules that form nitrites and sulphites, which reduce the pH of atmospheric water slightly, or harmfully in highly industrialised areas, where this is classed as air pollution and can create the phenomenon of acid rain, with a pH lower than the natural value of about 5.6. The negative effects of the by-products of combustion released into the atmospheric vapour can be removed by the use of scrubber towers and other physical means, and the captured pollutants can be processed into valuable by-products. The sources of atmospheric water vapor are the bodies of water (oceans, seas, lakes, rivers, swamps) and vegetation on the planetary surface, which humidify the troposphere through the processes of evaporation and transpiration respectively, and which influence the occurrence of weather phenomena; the greatest proportion of water vapor is in the atmosphere nearest the surface of the Earth.
The temperature of the troposphere decreases at high altitude by way of the inversion layers that occur in the tropopause, which is the atmospheric boundary that demarcates the troposphere from the stratosphere. At higher altitudes, the low air-temperature consequently decreases the saturation vapor pressure, the amount of atmospheric water vapor in the upper troposphere. Pressure The maximum air pressure (weight of the atmosphere) is at sea level and decreases at high altitude because the atmosphere is in hydrostatic equilibrium, wherein the air pressure is equal to the weight of the air above a given point on the planetary surface. The relation between decreased air pressure and high altitude can be equated to the density of a fluid, by way of the following hydrostatic equation: where: gn is the standard gravity ρ is the density z is the altitude P is the pressure R is the gas constant T is the thermodynamic (absolute) temperature m is the molar mass Temperature The planetary surface of the Earth heats the troposphere by means of latent heat, thermal radiation, and sensible heat. The gas layers of the troposphere are less dense at the geographic poles and denser at the equator, where the average height of the tropical troposphere is 13 km, approximately 7.0 km greater than the 6.0 km average height of the polar troposphere at the geographic poles; therefore, surplus heating and vertical expansion of the troposphere occur in the tropical latitudes. At the middle latitudes, tropospheric temperatures decrease from an average temperature of at sea level to approximately at the tropopause. At the equator, the tropospheric temperatures decrease from an average temperature of at sea level to approximately at the tropopause. At the geographical poles, the Arctic and the Antarctic regions, the tropospheric temperature decreases from an average temperature of at sea level to approximately at the tropopause. Altitude The temperature of the troposphere decreases with increased altitude, and the rate of decrease in air temperature is measured with the Environmental Lapse Rate () which is the numeric difference between the temperature of the planetary surface and the temperature of the tropopause divided by the altitude. Functionally, the ELR equation assumes that the planetary atmosphere is static, that there is no mixing of the layers of air, either by vertical atmospheric convection or winds that could create turbulence. The difference in temperature derives from the planetary surface absorbing most of the energy from the sun, which then radiates outwards and heats the troposphere (the first layer of the atmosphere of Earth) while the radiation of surface heat to the upper atmosphere results in the cooling of that layer of the atmosphere. The ELR equation also assumes that the atmosphere is static, but heated air becomes buoyant, expands, and rises. The dry adiabatic lapse rate (DALR) accounts for the effect of the expansion of dry air as it rises in the atmosphere, and the wet adiabatic lapse rate (WALR) includes the effect of the condensation-rate of water vapor upon the environmental lapse rate. Compression and expansion A parcel of air rises and expands because of the lower atmospheric pressure at high altitudes. The expansion of the air parcel pushes outwards against the surrounding air, and transfers energy (as work) from the parcel of air to the atmosphere. 
Transferring energy to a parcel of air by way of heat is a slow and inefficient exchange of energy with the environment, which is an adiabatic process (no energy transfer by way of heat). As the rising parcel of air loses energy while it acts upon the surrounding atmosphere, no heat energy is transferred from the atmosphere to the air parcel to compensate for the heat loss. The parcel of air loses energy as it reaches greater altitude, which is manifested as a decrease in the temperature of the air mass. Analogously, the reverse process occurs within a cold parcel of air that is being compressed and is sinking to the planetary surface. The compression and the expansion of an air parcel are reversible phenomena in which energy is not transferred into or out of the air parcel; atmospheric compression and expansion are measured as an isentropic process () wherein there occurs no change in entropy as the air parcel rises or falls within the atmosphere. Because the heat exchanged () is related to the change in entropy ( by ) the equation governing the air temperature as a function of altitude for a mixed atmosphere is: where is the entropy. The isentropic equation states that atmospheric entropy does not change with altitude; the adiabatic lapse rate measures the rate at which temperature decreases with altitude under such conditions. Humidity If the air contains water vapor, then cooling of the air can cause the water to condense, and the air no longer functions as an ideal gas. If the air is at the saturation vapor pressure, then the rate at which temperature decreases with altitude is called the saturated adiabatic lapse rate. The actual rate at which the temperature decreases with altitude is the environmental lapse rate. In the troposphere, the average environmental lapse rate is a decrease of about 6.5 °C for every 1.0 km (1,000m) of increased altitude. For dry air, an approximately ideal gas, the adiabatic equation is: wherein is the heat capacity ratio () for air. The combination of the equation for the air pressure yields the dry adiabatic lapse rate:. Environment The environmental lapse rate (), at which temperature decreases with altitude, usually is unequal to the adiabatic lapse rate (). If the upper air is warmer than predicted by the adiabatic lapse rate (), then a rising and expanding parcel of air will arrive at the new altitude at a lower temperature than the surrounding air. In which case, the air parcel is denser than the surrounding air, and so falls back to its original altitude as an air mass that is stable against being lifted. If the upper air is cooler than predicted by the adiabatic lapse rate, then, when the air parcel rises to a new altitude, the air mass will have a higher temperature and a lower density than the surrounding air and will continue to accelerate and rise. Tropopause The tropopause is the atmospheric boundary layer between the troposphere and the stratosphere, and is located by measuring the changes in temperature relative to increased altitude in the troposphere and in the stratosphere. In the troposphere, the temperature of the air decreases at high altitude, however, in the stratosphere the air temperature initially is constant, and then increases with altitude. The increase of air temperature at stratospheric altitudes results from the ozone layer's absorption and retention of the ultraviolet (UV) radiation that Earth receives from the Sun. 
The coldest layer of the atmosphere, where the temperature lapse rate changes from a positive rate (in the troposphere) to a negative rate (in the stratosphere) locates and identifies the tropopause as an inversion layer in which limited mixing of air layers occurs between the troposphere and the stratosphere. Atmospheric flow The general flow of the atmosphere is from west to east, which, however, can be interrupted by polar flows, either north-to-south flow or a south-to-north flow, which meteorology describes as a zonal flow and as a meridional flow. The terms are used to describe localized areas of the atmosphere at a synoptic scale; the three-cell model more fully explains the zonal and meridional flows of the planetary atmosphere of the Earth. Three-cell model The three-cell model of the atmosphere of the Earth describes the actual flow of the atmosphere with the tropical-latitude Hadley cell, the mid-latitude Ferrel cell, and the polar cell to describe the flow of energy and the circulation of the planetary atmosphere. Balance is the fundamental principle of the model — that the solar energy absorbed by the Earth in a year is equal to the energy radiated (lost) into outer space. The Earth's energy balance does not equally apply to each latitude because of the varying strength of the sunlight that strikes each of the three atmospheric cells, consequent to the inclination of the axis of planet Earth within its orbit of the Sun. The resultant atmospheric circulation transports warm tropical air to the geographic poles and cold polar air to the tropics. The effect of the three cells is the tendency to the equilibrium of heat and moisture in the planetary atmosphere of Earth. Zonal flow A zonal flow regime is the meteorological term meaning that the general flow pattern is west to east along the Earth's latitude lines, with weak shortwaves embedded in the flow. The use of the word "zone" refers to the flow being along the Earth's latitudinal "zones". This pattern can buckle and thus become a meridional flow. Meridional flow When the zonal flow buckles, the atmosphere can flow in a more longitudinal (or meridional) direction, and thus the term "meridional flow" arises. Meridional flow patterns feature strong, amplified troughs of low pressure and ridges of high pressure, with more north–south flow in the general pattern than west-to-east flow.
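The pressure and lapse-rate relationships described in the sections above can be combined into a small numerical sketch: integrating the hydrostatic equation with the typical tropospheric lapse rate of about 6.5 °C per km gives a pressure profile, and comparing that environmental lapse rate with the dry adiabatic lapse rate g/cp indicates static stability. The constants below are the usual standard-atmosphere values; the sketch is an illustration, not part of the article.

```python
import math

g  = 9.80665        # standard gravity, m/s^2
Rg = 8.314462618    # universal gas constant, J/(mol*K)
M  = 0.0289644      # molar mass of dry air, kg/mol
cp = 1004.7         # specific heat of dry air at constant pressure, J/(kg*K)

P0 = 101_325.0      # sea-level pressure, Pa
T0 = 288.15         # sea-level temperature, K
L  = 0.0065         # environmental lapse rate, K/m (about 6.5 degC per km)

def pressure(z):
    """Pressure at altitude z, from dP/dz = -rho*g with T(z) = T0 - L*z."""
    return P0 * (1 - L * z / T0) ** (g * M / (Rg * L))

for z in (0, 1_000, 5_000, 11_000):
    print(f"{z:>6} m : T = {T0 - L * z:6.1f} K, P = {pressure(z) / 100:7.1f} hPa")

dalr = g / cp * 1000          # dry adiabatic lapse rate, about 9.8 K per km
elr  = L * 1000               # environmental lapse rate, 6.5 K per km
print(f"dry adiabatic lapse rate: {dalr:.1f} K/km, environmental: {elr:.1f} K/km")
# Because the environment cools more slowly with height than a rising dry
# parcel (ELR < DALR), the lifted parcel ends up colder and denser than its
# surroundings and sinks back: the layer is stable for unsaturated air.
```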
Physical sciences
Atmosphere: General
Earth science
41831
https://en.wikipedia.org/wiki/Telephony
Telephony
Telephony ( ) is the field of technology involving the development, application, and deployment of telecommunications services for the purpose of electronic transmission of voice, fax, or data, between distant parties. The history of telephony is intimately linked to the invention and development of the telephone. Telephony is commonly referred to as the construction or operation of telephones and telephonic systems and as a system of telecommunications in which telephonic equipment is employed in the transmission of speech or other sound between points, with or without the use of wires. The term is also used frequently to refer to computer hardware, software, and computer network systems, that perform functions traditionally performed by telephone equipment. In this context the technology is specifically referred to as Internet telephony, or voice over Internet Protocol (VoIP). Overview The first telephones were connected directly in pairs. Each user had a separate telephone wired to each locations to be reached. This quickly became inconvenient and unmanageable when users wanted to communicate with more than a few people. The invention of the telephone exchange provided the solution for establishing telephone connections with any other telephone in service in the local area. Each telephone was connected to the exchange at first with one wire, later one wire pair, the local loop. Nearby exchanges in other service areas were connected with trunk lines, and long-distance service could be established by relaying the calls through multiple exchanges. Initially, exchange switchboards were manually operated by an attendant, commonly referred to as the "switchboard operator". When a customer cranked a handle on the telephone, it activated an indicator on the board in front of the operator, who would in response plug the operator headset into that jack and offer service. The caller had to ask for the called party by name, later by number, and the operator connected one end of a circuit into the called party jack to alert them. If the called station answered, the operator disconnected their headset and completed the station-to-station circuit. Trunk calls were made with the assistance of other operators at other exchangers in the network. Until the 1970s, most telephones were permanently wired to the telephone line installed at customer premises. Later, conversion to installation of jacks that terminated the inside wiring permitted simple exchange of telephone sets with telephone plugs and allowed portability of the set to multiple locations in the premises where jacks were installed. The inside wiring to all jacks was connected in one place to the wire drop which connects the building to a cable. Cables usually bring a large number of drop wires from all over a district access network to one wire center or telephone exchange. When a telephone user wants to make a telephone call, equipment at the exchange examines the dialed telephone number and connects that telephone line to another in the same wire center, or to a trunk to a distant exchange. Most of the exchanges in the world are interconnected through a system of larger switching systems, forming the public switched telephone network (PSTN). In the second half of the 20th century, fax and data became important secondary applications of the network created to carry voices, and late in the century, parts of the network were upgraded with ISDN and DSL to improve handling of such traffic. 
Today, telephony uses digital technology (digital telephony) in the provisioning of telephone services and systems. Telephone calls can be provided digitally, but may be restricted to cases in which the last mile is digital, or where the conversion between digital and analog signals takes place inside the telephone. This advancement has reduced costs in communication, and improved the quality of voice services. The first implementation of this, ISDN, permitted all data transport from end-to-end speedily over telephone lines. This service was later made much less important due to the ability to provide digital services based on the Internet protocol suite. Since the advent of personal computer technology in the 1980s, computer telephony integration (CTI) has progressively provided more sophisticated telephony services, initiated and controlled by the computer, such as making and receiving voice, fax, and data calls with telephone directory services and caller identification. The integration of telephony software and computer systems is a major development in the evolution of office automation. The term is used in describing the computerized services of call centers, such as those that direct your phone call to the right department at a business you're calling. It is also sometimes used for the ability to use your personal computer to initiate and manage phone calls (in which case you can think of your computer as your personal call center). Digital telephony Digital telephony is the use of digital electronics in the operation and provisioning of telephony systems and services. Since the late 20th century, a digital core network has replaced the traditional analog transmission and signaling systems, and much of the access network has also been digitized. Starting with the development of transistor technology, originating from Bell Telephone Laboratories in 1947, to amplification and switching circuits in the 1950s, the public switched telephone network (PSTN) has gradually moved towards solid-state electronics and automation. Following the development of computer-based electronic switching systems incorporating metal–oxide–semiconductor (MOS) and pulse-code modulation (PCM) technologies, the PSTN gradually evolved towards the digitization of signaling and audio transmissions. Digital telephony has since dramatically improved the capacity, quality and cost of the network. Digitization allows wideband voice on the same channel, with improved quality of a wider analog voice channel. History The earliest end-to-end analog telephone networks to be modified and upgraded to transmission networks with Digital Signal 1 (DS1/T1) carrier systems date back to the early 1960s. They were designed to support the basic 3 kHz voice channel by sampling the bandwidth-limited analog voice signal and encoding using pulse-code modulation (PCM). Early PCM codec-filters were implemented as passive resistorcapacitorinductor filter circuits, with analog-to-digital conversion (for digitizing voices) and digital-to-analog conversion (for reconstructing voices) handled by discrete devices. Early digital telephony was impractical due to the low performance and high costs of early PCM codec-filters. Practical digital telecommunication was enabled by the invention of the metal–oxide–semiconductor field-effect transistor (MOSFET), which led to the rapid development and wide adoption of PCM digital telephony. 
In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. Subsequently, a team demonstrated a working MOSFET at Bell Labs 1960. MOS technology was initially overlooked by Bell because they did not find it practical for analog telephone applications, before it was commercialized by Fairchild and RCA for digital electronics such as computers. MOS technology eventually became practical for telephone applications with the MOS mixed-signal integrated circuit, which combines analog and digital signal processing on a single chip, developed by former Bell engineer David A. Hodges with Paul R. Gray at UC Berkeley in the early 1970s. In 1974, Hodges and Gray worked with R.E. Suarez to develop MOS switched capacitor (SC) circuit technology, which they used to develop a digital-to-analog converter (DAC) chip, using MOS capacitors and MOSFET switches for data conversion. MOS analog-to-digital converter (ADC) and DAC chips were commercialized by 1974. MOS SC circuits led to the development of PCM codec-filter chips in the late 1970s. The silicon-gate CMOS (complementary MOS) PCM codec-filter chip, developed by Hodges and W.C. Black in 1980, has since been the industry standard for digital telephony. By the 1990s, telecommunication networks such as the public switched telephone network (PSTN) had been largely digitized with very-large-scale integration (VLSI) CMOS PCM codec-filters, widely used in electronic switching systems for telephone exchanges, private branch exchanges (PBX) and key telephone systems (KTS); user-end modems; data transmission applications such as digital loop carriers, pair gain multiplexers, telephone loop extenders, integrated services digital network (ISDN) terminals, digital cordless telephones and digital cell phones; and applications such as speech recognition equipment, voice data storage, voice mail and digital tapeless answering machines. The bandwidth of digital telecommunication networks has been rapidly increasing at an exponential rate, as observed by Edholm's law, largely driven by the rapid scaling and miniaturization of MOS technology. Uncompressed PCM digital audio with 8-bit depth and 8kHz sample rate requires a bit rate of 64kbit/s, which was impractical for early digital telecommunication networks with limited network bandwidth. A solution to this issue was linear predictive coding (LPC), a speech coding data compression algorithm that was first proposed by Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT) in 1966. LPC was capable of audio data compression down to 2.4kbit/s, leading to the first successful real-time conversations over digital networks in the 1970s. LPC has since been the most widely used speech coding method. Another audio data compression method, a discrete cosine transform (DCT) algorithm called the modified discrete cosine transform (MDCT), has been widely adopted for speech coding in voice-over-IP (VoIP) applications since the late 1990s. The development of transmission methods such as SONET and fiber optic transmission further advanced digital transmission. Although analog carrier systems existed that multiplexed multiple analog voice channels onto a single transmission medium, digital transmission allowed lower cost and more channels multiplexed on the transmission medium. 
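The 64 kbit/s figure for uncompressed PCM voice quoted above is simple arithmetic (sample rate times bit depth); the snippet below also shows the rough compression factor implied by 2.4 kbit/s LPC.

```python
sample_rate = 8_000   # samples per second (8 kHz voice-band sampling)
bit_depth   = 8       # bits per sample

pcm_rate = sample_rate * bit_depth
print(f"uncompressed PCM voice: {pcm_rate} bit/s = {pcm_rate // 1000} kbit/s")

lpc_rate = 2_400      # early LPC speech coding rate, bit/s
print(f"LPC at 2.4 kbit/s is roughly {pcm_rate / lpc_rate:.0f}x more compact")
```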
Today the end instrument often remains analog but the analog signals are typically converted to digital signals at the serving area interface (SAI), central office (CO), or other aggregation point. Digital loop carriers (DLC) and fiber to the x place the digital network ever closer to the customer premises, relegating the analog local loop to legacy status. IP telephony The field of technology available for telephony has broadened with the advent of new communication technologies. Telephony now includes the technologies of Internet services and mobile communication, including video conferencing. The new technologies based on Internet Protocol (IP) concepts are often referred to separately as voice over IP (VoIP) telephony, also commonly referred to as IP telephony or Internet telephony. Unlike traditional phone service, IP telephony service is relatively unregulated by government. In the United States, the Federal Communications Commission (FCC) regulates phone-to-phone connections, but says they do not plan to regulate connections between a phone user and an IP telephony service provider. A specialization of digital telephony, Internet Protocol (IP) telephony involves the application of digital networking technology that was the foundation to the Internet to create, transmit, and receive telecommunications sessions over computer networks. Internet telephony is commonly known as voice over Internet Protocol (VoIP), reflecting the principle, but it has been referred with many other terms. VoIP has proven to be a disruptive technology that is rapidly replacing traditional telephone infrastructure technologies. As of January 2005, up to 10% of telephone subscribers in Japan and South Korea have switched to this digital telephone service. A January 2005 Newsweek article suggested that Internet telephony may be "the next big thing". As of 2006, many VoIP companies offer service to consumers and businesses. A significant advancement in mobile telephony has been the integration of IP technologies into mobile networks, notably through Voice over LTE (VoLTE) and Voice over 5G (Vo5G). These technologies enable voice calls to be transmitted over the same IP-based infrastructure used for data services, offering improved call quality and faster connections compared to traditional circuit-switched networks. VoLTE and Vo5G are becoming the standard for mobile voice communication in many regions, as mobile operators transition to all-IP networks. IP telephony uses an Internet connection and hardware IP phones, analog telephone adapters, or softphone computer applications to transmit conversations encoded as data packets. While one of the most common and cost-effective uses of IP telephony is through connections over WiFi hotspots, it is also employed on private networks and over other types of Internet connections, which may or may not have a direct link to the global telephone network. Social impact research Direct person-to-person communication includes non-verbal cues expressed in facial and other bodily articulation, that cannot be transmitted in traditional voice telephony. Video telephony restores such interactions to varying degrees. Social Context Cues Theory is a model to measure the success of different types of communication in maintaining the non-verbal cues present in face-to-face interactions. The research examines many different cues, such as the physical context, different facial expressions, body movements, tone of voice, touch and smell. 
Various communication cues are lost with the usage of the telephone. The communicating parties are not able to identify the body movements, and lack touch and smell. Although this diminished ability to identify social cues is well known, Wiesenfeld, Raghuram, and Garud point out that there is a value and efficiency to the type of communication for different tasks. They examine work places in which different types of communication, such as the telephone, are more useful than face-to-face interaction. The expansion of communication to mobile telephone service has created a different filter of the social cues than the land-line telephone. The use of instant messaging, such as texting, on mobile telephones has created a sense of community. In The Social Construction of Mobile Telephony it is suggested that each phone call and text message is more than an attempt to converse. Instead, it is a gesture which maintains the social network between family and friends. Although there is a loss of certain social cues through telephones, mobile phones bring new forms of expression of different cues that are understood by different audiences. New language additives attempt to compensate for the inherent lack of non-physical interaction. Another social theory supported through telephony is the Media Dependency Theory. This theory concludes that people use media or a resource to attain certain goals. This theory states that there is a link between the media, audience, and the large social system. Telephones, depending on the person, help attain certain goals like accessing information, keeping in contact with others, sending quick communication, entertainment, etc.
Technology
Telecommunications
null
41863
https://en.wikipedia.org/wiki/Waveguide
Waveguide
A waveguide is a structure that guides waves by restricting the transmission of energy to one direction. Common types of waveguides include acoustic waveguides which direct sound, optical waveguides which direct light, and radio-frequency waveguides which direct electromagnetic waves other than light like radio waves. Without the physical constraint of a waveguide, waves would expand into three-dimensional space and their intensities would decrease according to the inverse square law. There are different types of waveguides for different types of waves. The original and most common meaning is a hollow conductive metal pipe used to carry high frequency radio waves, particularly microwaves. Dielectric waveguides are used at higher radio frequencies, and transparent dielectric waveguides and optical fibers serve as waveguides for light. In acoustics, air ducts and horns are used as waveguides for sound in musical instruments and loudspeakers, and specially-shaped metal rods conduct ultrasonic waves in ultrasonic machining. The geometry of a waveguide reflects its function; in addition to more common types that channel the wave in one dimension, there are two-dimensional slab waveguides which confine waves to two dimensions. The frequency of the transmitted wave also dictates the size of a waveguide: each waveguide has a cutoff wavelength determined by its size and will not conduct waves of greater wavelength; an optical fiber that guides light will not transmit microwaves which have a much larger wavelength. Some naturally occurring structures can also act as waveguides. The SOFAR channel layer in the ocean can guide the sound of whale song across enormous distances. Any shape of cross section of waveguide can support EM waves. Irregular shapes are difficult to analyse. Commonly used waveguides are rectangular and circular in shape. Uses The uses of waveguides for transmitting signals were known even before the term was coined. The phenomenon of sound waves guided through a taut wire have been known for a long time, as well as sound through a hollow pipe such as a cave or medical stethoscope. Other uses of waveguides are in transmitting power between the components of a system such as radio, radar or optical devices. Waveguides are the fundamental principle of guided wave testing (GWT), one of the many methods of non-destructive evaluation. Specific examples: Optical fibers transmit light and signals for long distances with low attenuation and a wide usable range of wavelengths. In a microwave oven a waveguide transfers power from the magnetron, where waves are formed, to the cooking chamber. In a radar, a waveguide transfers radio frequency energy to and from the antenna, where the impedance needs to be matched for efficient power transmission (see below). Rectangular and circular waveguides are commonly used to connect feeds of parabolic dishes to their electronics, either low-noise receivers or power amplifier/transmitters. Waveguides are used in scientific instruments to measure optical, acoustic and elastic properties of materials and objects. The waveguide can be put in contact with the specimen (as in a medical ultrasonography), in which case the waveguide ensures that the power of the testing wave is conserved, or the specimen may be put inside the waveguide (as in a dielectric constant measurement, so that smaller objects can be tested and the accuracy is better. A transmission line is a commonly used specific type of waveguide. 
History The first structure for guiding waves was proposed by J. J. Thomson in 1893, and was first experimentally tested by Oliver Lodge in 1894. The first mathematical analysis of electromagnetic waves in a metal cylinder was performed by Lord Rayleigh in 1897. For sound waves, Lord Rayleigh published a full mathematical analysis of propagation modes in his seminal work, "The Theory of Sound". Jagadish Chandra Bose researched millimeter wavelengths using waveguides, and in 1897 described to the Royal Institution in London his research carried out in Kolkata. The study of dielectric waveguides (such as optical fibers, see below) began as early as the 1920s, by several people, most famous of which are Rayleigh, Sommerfeld and Debye. Optical fiber began to receive special attention in the 1960s due to its importance to the communications industry. The development of radio communication initially occurred at the lower frequencies because these could be more easily propagated over large distances. The long wavelengths made these frequencies unsuitable for use in hollow metal waveguides because of the impractically large diameter tubes required. Consequently, research into hollow metal waveguides stalled and the work of Lord Rayleigh was forgotten for a time and had to be rediscovered by others. Practical investigations resumed in the 1930s by George C. Southworth at Bell Labs and Wilmer L. Barrow at MIT. Southworth at first took the theory from papers on waves in dielectric rods because the work of Lord Rayleigh was unknown to him. This misled him somewhat; some of his experiments failed because he was not aware of the phenomenon of waveguide cutoff frequency already found in Lord Rayleigh's work. Serious theoretical work was taken up by John R. Carson and Sallie P. Mead. This work led to the discovery that for the TE01 mode in circular waveguide losses go down with frequency and at one time this was a serious contender for the format for long-distance telecommunications. The importance of radar in World War II gave a great impetus to waveguide research, at least on the Allied side. The magnetron, developed in 1940 by John Randall and Harry Boot at the University of Birmingham in the United Kingdom, provided a good power source and made microwave radar feasible. The most important centre of US research was at the Radiation Laboratory (Rad Lab) at MIT but many others took part in the US, and in the UK such as the Telecommunications Research Establishment. The head of the Fundamental Development Group at Rad Lab was Edward Mills Purcell. His researchers included Julian Schwinger, Nathan Marcuvitz, Carol Gray Montgomery, and Robert H. Dicke. Much of the Rad Lab work concentrated on finding lumped element models of waveguide structures so that components in waveguide could be analysed with standard circuit theory. Hans Bethe was also briefly at Rad Lab, but while there he produced his small aperture theory which proved important for waveguide cavity filters, first developed at Rad Lab. The German side, on the other hand, largely ignored the potential of waveguides in radar until very late in the war. So much so that when radar parts from a downed British plane were sent to Siemens & Halske for analysis, even though they were recognised as microwave components, their purpose could not be identified. German academics were even allowed to continue publicly publishing their research in this field because it was not felt to be important. 
Immediately after World War II waveguide was the technology of choice in the microwave field. However, it has some problems; it is bulky, expensive to produce, and the cutoff frequency effect makes it difficult to produce wideband devices. Ridged waveguide can increase bandwidth beyond an octave, but a better solution is to use a technology working in TEM mode (that is, non-waveguide) such as coaxial conductors since TEM does not have a cutoff frequency. A shielded rectangular conductor can also be used and this has certain manufacturing advantages over coax and can be seen as the forerunner of the planar technologies (stripline and microstrip). However, planar technologies really started to take off when printed circuits were introduced. These methods are significantly cheaper than waveguide and have largely taken its place in most bands. However, waveguide is still favoured in the higher microwave bands from around Ku band upwards. Properties Propagation modes and cutoff frequencies A propagation mode in a waveguide is one solution of the wave equations, or, in other words, the form of the wave. Due to the constraints of the boundary conditions, there are only limited frequencies and forms for the wave function which can propagate in the waveguide. The lowest frequency in which a certain mode can propagate is the cutoff frequency of that mode. The mode with the lowest cutoff frequency is the fundamental mode of the waveguide, and its cutoff frequency is the waveguide cutoff frequency. Propagation modes are computed by solving the Helmholtz equation alongside a set of boundary conditions depending on the geometrical shape and materials bounding the region. The usual assumption for infinitely long uniform waveguides allows us to assume a propagating form for the wave, i.e. stating that every field component has a known dependency on the propagation direction (i.e. ). More specifically, the common approach is to first replace all unknown time-varying fields (assuming for simplicity to describe the fields in cartesian components) with their complex phasors representation , sufficient to fully describe any infinitely long single-tone signal at frequency , (angular frequency ), and rewrite the Helmholtz equation and boundary conditions accordingly. Then, every unknown field is forced to have a form like , where the term represents the propagation constant (still unknown) along the direction along which the waveguide extends to infinity. The Helmholtz equation can be rewritten to accommodate such form and the resulting equality needs to be solved for and , yielding in the end an eigenvalue equation for and a corresponding eigenfunction for each solution of the former. The propagation constant of the guided wave is complex, in general. For a lossless case, the propagation constant might be found to take on either real or imaginary values, depending on the chosen solution of the eigenvalue equation and on the angular frequency . When is purely real, the mode is said to be "below cutoff", since the amplitude of the field phasors tends to exponentially decrease with propagation; an imaginary , instead, represents modes said to be "in propagation" or "above cutoff", as the complex amplitude of the phasors does not change with . Impedance matching In circuit theory, the impedance is a generalization of electrical resistance in the case of alternating current, and is measured in ohms (). A waveguide in circuit theory is described by a transmission line having a length and characteristic impedance. 
In other words, the impedance indicates the ratio of voltage to current of the circuit component (in this case a waveguide) during propagation of the wave. This description of the waveguide was originally intended for alternating current, but is also suitable for electromagnetic and sound waves, once the wave and material properties (such as pressure, density, dielectric constant) are properly converted into electrical terms (current and impedance for example). Impedance matching is important when components of an electric circuit are connected (waveguide to antenna for example): The impedance ratio determines how much of the wave is transmitted forward and how much is reflected. In connecting a waveguide to an antenna a complete transmission is usually required, so an effort is made to match their impedances. The reflection coefficient can be calculated using: , where (Gamma) is the reflection coefficient (0 denotes full transmission, 1 full reflection, and 0.5 is a reflection of half the incoming voltage), and are the impedance of the first component (from which the wave enters) and the second component, respectively. An impedance mismatch creates a reflected wave, which added to the incoming waves creates a standing wave. An impedance mismatch can be also quantified with the standing wave ratio (SWR or VSWR for voltage), which is connected to the impedance ratio and reflection coefficient by: , where are the minimum and maximum values of the voltage absolute value, and the VSWR is the voltage standing wave ratio, which value of 1 denotes full transmission, without reflection and thus no standing wave, while very large values mean high reflection and standing wave pattern. Electromagnetic waveguides Radio-frequency waveguides Waveguides can be constructed to carry waves over a wide portion of the electromagnetic spectrum, but are especially useful in the microwave and optical frequency ranges. Depending on the frequency, they can be constructed from either conductive or dielectric materials. Waveguides are used for transferring both power and communication signals. Optical waveguides Waveguides used at optical frequencies are typically dielectric waveguides, structures in which a dielectric material with high permittivity, and thus high index of refraction, is surrounded by a material with lower permittivity. The structure guides optical waves by total internal reflection. An example of an optical waveguide is optical fiber. Other types of optical waveguide are also used, including photonic-crystal fiber, which guides waves by any of several distinct mechanisms. Guides in the form of a hollow tube with a highly reflective inner surface have also been used as light pipes for illumination applications. The inner surfaces may be polished metal, or may be covered with a multilayer film that guides light by Bragg reflection (this is a special case of a photonic-crystal fiber). One can also use small prisms around the pipe which reflect light via total internal reflection —such confinement is necessarily imperfect, however, since total internal reflection can never truly guide light within a lower-index core (in the prism case, some light leaks out at the prism corners). Acoustic waveguides An acoustic waveguide is a physical structure for guiding sound waves. Sound in an acoustic waveguide behaves like electromagnetic waves on a transmission line. Waves on a string, like the ones in a tin can telephone, are a simple example of an acoustic waveguide. 
Another example are pressure waves in the pipes of an organ. The term acoustic waveguide is also used to describe elastic waves guided in micro-scale devices, like those employed in piezoelectric delay lines and in stimulated Brillouin scattering. Mathematical waveguides Waveguides are interesting objects of study from a strictly mathematical perspective. A waveguide (or tube) is defined as type of boundary condition on the wave equation such that the wave function must be equal to zero on the boundary and that the allowed region is finite in all dimensions but one (an infinitely long cylinder is an example.) A large number of interesting results can be proven from these general conditions. It turns out that any tube with a bulge (where the width of the tube increases) admits at least one bound state that exist inside the mode gaps. The frequencies of all the bound states can be identified by using a pulse short in time. This can be shown using the variational principles. An interesting result by Jeffrey Goldstone and Robert Jaffe is that any tube of constant width with a twist, admits a bound state. Sound synthesis Sound synthesis uses digital delay lines as computational elements to simulate wave propagation in tubes of wind instruments and the vibrating strings of string instruments.
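Two of the quantities discussed earlier in this article lend themselves to a quick numerical check: the cutoff frequencies of a hollow rectangular guide, which depend only on the cross-section (fc = (c/2)·√((m/a)² + (n/b)²) for the TEmn mode), and the reflection coefficient and standing wave ratio at an impedance step, Γ = (Z2 − Z1)/(Z2 + Z1) and VSWR = (1 + |Γ|)/(1 − |Γ|). The WR-90 dimensions and the 50 Ω to 75 Ω junction below are standard illustrative examples, not values taken from the text.

```python
import math

c = 299_792_458.0   # speed of light, m/s

def te_cutoff(a, b, m, n):
    """Cutoff frequency of the TE(m,n) mode of an air-filled rectangular guide."""
    return (c / 2) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

# WR-90 (X band) inner dimensions, a commonly quoted example: 22.86 mm x 10.16 mm
a, b = 22.86e-3, 10.16e-3
for m, n in [(1, 0), (2, 0), (0, 1)]:
    print(f"TE{m}{n} cutoff: {te_cutoff(a, b, m, n) / 1e9:5.2f} GHz")
# Below the TE10 cutoff (~6.56 GHz) nothing propagates; between the TE10 and
# the next cutoff the guide carries a single mode, the usual operating region.

def reflection_coefficient(z1, z2):
    """Reflection coefficient for a wave travelling from impedance z1 into z2."""
    return (z2 - z1) / (z2 + z1)

gamma = reflection_coefficient(50.0, 75.0)   # e.g. a 50 ohm guide feeding a 75 ohm antenna
vswr = (1 + abs(gamma)) / (1 - abs(gamma))
print(f"gamma = {gamma:.2f}, VSWR = {vswr:.2f}, reflected power = {abs(gamma)**2:.1%}")
```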
Technology
Components
null
41890
https://en.wikipedia.org/wiki/Group%20theory
Group theory
In abstract algebra, group theory studies the algebraic structures known as groups. The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right. Various physical systems, such as crystals and the hydrogen atom, and three of the four known fundamental forces in the universe, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography. The early history of group theory dates from the 19th century. One of the most important mathematical achievements of the 20th century was the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a complete classification of finite simple groups. History Group theory has three main historical sources: number theory, the theory of algebraic equations, and geometry. The number-theoretic strand was begun by Leonhard Euler, and developed by Gauss's work on modular arithmetic and additive and multiplicative groups related to quadratic fields. Early results about permutation groups were obtained by Lagrange, Ruffini, and Abel in their quest for general solutions of polynomial equations of high degree. Évariste Galois coined the term "group" and established a connection, now known as Galois theory, between the nascent theory of groups and field theory. In geometry, groups first became important in projective geometry and, later, non-Euclidean geometry. Felix Klein's Erlangen program proclaimed group theory to be the organizing principle of geometry. Galois, in the 1830s, was the first to employ groups to determine the solvability of polynomial equations. Arthur Cayley and Augustin Louis Cauchy pushed these investigations further by creating the theory of permutation groups. The second historical source for groups stems from geometrical situations. In an attempt to come to grips with possible geometries (such as euclidean, hyperbolic or projective geometry) using group theory, Felix Klein initiated the Erlangen programme. Sophus Lie, in 1884, started using groups (now called Lie groups) attached to analytic problems. Thirdly, groups were, at first implicitly and later explicitly, used in algebraic number theory. The different scope of these early sources resulted in different notions of groups. The theory of groups was unified starting around 1880. Since then, the impact of group theory has been ever growing, giving rise to the birth of abstract algebra in the early 20th century, representation theory, and many more influential spin-off domains. The classification of finite simple groups is a vast body of work from the mid 20th century, classifying all the finite simple groups. Main classes of groups The range of groups being considered has gradually expanded from finite permutation groups and special examples of matrix groups to abstract groups that may be specified through a presentation by generators and relations. Permutation groups The first class of groups to undergo a systematic study was permutation groups. 
Given any set X and a collection G of bijections of X into itself (known as permutations) that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn; in general, any permutation group G is a subgroup of the symmetric group of X. An early construction due to Cayley exhibited any group as a permutation group, acting on itself (X = G) by means of the left regular representation. In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for n ≥ 5, the alternating group An is simple, i.e. does not admit any proper normal subgroups. This fact plays a key role in the impossibility of solving a general algebraic equation of degree n ≥ 5 in radicals. Matrix groups The next important class of groups is given by matrix groups, or linear groups. Here G is a set consisting of invertible matrices of given order n over a field K that is closed under products and inverses. Such a group acts on the n-dimensional vector space Kn by linear transformations. This action makes matrix groups conceptually similar to permutation groups, and the geometry of the action may be usefully exploited to establish properties of the group G. Transformation groups Permutation groups and matrix groups are special cases of transformation groups: groups that act on a certain space X preserving its inherent structure. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a transformation group is closely related to the concept of a symmetry group: transformation groups frequently consist of all transformations that preserve a certain structure. The theory of transformation groups forms a bridge connecting group theory with differential geometry. A long line of research, originating with Lie and Klein, considers group actions on manifolds by homeomorphisms or diffeomorphisms. The groups themselves may be discrete or continuous. Abstract groups Most groups considered in the first stage of the development of group theory were "concrete", having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group began to take hold, where "abstract" means that the nature of the elements is ignored in such a way that two isomorphic groups are considered as the same group. A typical way of specifying an abstract group is through a presentation by generators and relations. A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory. If a group G is a permutation group on a set X, the factor group G/H is no longer acting on X; but the idea of an abstract group permits one not to worry about this discrepancy. The change of perspective from concrete to abstract groups makes it natural to consider properties of groups that are independent of a particular realization, or in modern language, invariant under isomorphism, as well as the classes of groups with a given such property: finite groups, periodic groups, simple groups, solvable groups, and so on. Rather than exploring properties of an individual group, one seeks to establish results that apply to a whole class of groups.
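As a concrete, hedged illustration of the permutation groups described above (a minimal sketch using only the Python standard library; the set {0, 1, 2} and the helper names are chosen here purely for illustration), the symmetric group S3 can be built explicitly and its closure under composition and inverses checked:

from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); a permutation is stored as a tuple mapping index i to p[i].
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

S3 = list(permutations(range(3)))      # all 3! = 6 bijections of {0, 1, 2}
identity = tuple(range(3))

# Group axioms in action: closure under composition, and every element has an inverse.
assert all(compose(p, q) in S3 for p in S3 for q in S3)
assert all(compose(p, inverse(p)) == identity for p in S3)

The same pattern extends to any finite permutation group, although exhaustive checks of this kind are only practical for small sets.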
The new paradigm was of paramount importance for the development of mathematics: it foreshadowed the creation of abstract algebra in the works of Hilbert, Emil Artin, Emmy Noether, and mathematicians of their school. Groups with additional structure An important elaboration of the concept of a group occurs if G is endowed with additional structure, notably, of a topological space, differentiable manifold, or algebraic variety. If the group operations m (multiplication) and i (inversion), are compatible with this structure, that is, they are continuous, smooth or regular (in the sense of algebraic geometry) maps, then G is a topological group, a Lie group, or an algebraic group. The presence of extra structure relates these types of groups with other mathematical disciplines and means that more tools are available in their study. Topological groups form a natural domain for abstract harmonic analysis, whereas Lie groups (frequently realized as transformation groups) are the mainstays of differential geometry and unitary representation theory. Certain classification questions that cannot be solved in general can be approached and resolved for special subclasses of groups. Thus, compact connected Lie groups have been completely classified. There is a fruitful relation between infinite abstract groups and topological groups: whenever a group Γ can be realized as a lattice in a topological group G, the geometry and analysis pertaining to G yield important results about Γ. A comparatively recent trend in the theory of finite groups exploits their connections with compact topological groups (profinite groups): for example, a single p-adic analytic group G has a family of quotients which are finite p-groups of various orders, and properties of G translate into the properties of its finite quotients. Branches of group theory Finite group theory During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially the local theory of finite groups and the theory of solvable and nilpotent groups. As a consequence, the complete classification of finite simple groups was achieved, meaning that all those simple groups from which all finite groups can be built are now known. During the second half of the twentieth century, mathematicians such as Chevalley and Steinberg also increased our understanding of finite analogs of classical groups, and other related groups. One such family of groups is the family of general linear groups over finite fields. Finite groups often occur when considering symmetry of mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory of Lie groups, which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associated Weyl groups. These are finite groups generated by reflections which act on a finite-dimensional Euclidean space. The properties of finite groups can thus play a role in subjects such as theoretical physics and chemistry. Representation of groups Saying that a group G acts on a set X means that every element of G defines a bijective map on the set X in a way compatible with the group structure. When X has more structure, it is useful to restrict this notion further: a representation of G on a vector space V is a group homomorphism: where GL(V) consists of the invertible linear transformations of V. In other words, to every group element g is assigned an automorphism ρ(g) such that for any h in G. 
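To make the homomorphism property of a representation concrete, here is a minimal Python sketch (standard library only; the choice of Z/4 and of 2×2 rotation matrices is an assumption made for illustration) checking that matrix multiplication mirrors the group operation:

import math

def rho(k):
    # Represent the element k of Z/4 by rotation through k * 90 degrees.
    a = k * math.pi / 2
    return ((math.cos(a), -math.sin(a)), (math.sin(a), math.cos(a)))

def matmul(A, B):
    return tuple(tuple(sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)) for i in range(2))

def approx_equal(A, B, eps=1e-9):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(2) for j in range(2))

# rho is a representation: rho(g) rho(h) = rho(g + h mod 4) for all group elements g, h.
assert all(approx_equal(matmul(rho(g), rho(h)), rho((g + h) % 4)) for g in range(4) for h in range(4))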
This definition can be understood in two directions, both of which give rise to whole new domains of mathematics. On the one hand, it may yield new information about the group G: often, the group operation in G is abstractly given, but via ρ, it corresponds to the multiplication of matrices, which is very explicit. On the other hand, given a well-understood group acting on a complicated object, this simplifies the study of the object in question. For example, if G is finite, it is known that V above decomposes into irreducible parts (see Maschke's theorem). These parts, in turn, are much more easily manageable than the whole V (via Schur's lemma). Given a group G, representation theory then asks what representations of G exist. There are several settings, and the employed methods and obtained results are rather different in every case: representation theory of finite groups and representations of Lie groups are two main subdomains of the theory. The totality of representations is governed by the group's characters. For example, Fourier polynomials can be interpreted as the characters of U(1), the group of complex numbers of absolute value 1, acting on the L2-space of periodic functions. Lie theory A Lie group is a group that is also a differentiable manifold, with the property that the group operations are compatible with the smooth structure. Lie groups are named after Sophus Lie, who laid the foundations of the theory of continuous transformation groups. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse, page 3. Lie groups represent the best-developed theory of continuous symmetry of mathematical objects and structures, which makes them indispensable tools for many parts of contemporary mathematics, as well as for modern theoretical physics. They provide a natural framework for analysing the continuous symmetries of differential equations (differential Galois theory), in much the same way as permutation groups are used in Galois theory for analysing the discrete symmetries of algebraic equations. An extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations. Combinatorial and geometric group theory Groups can be described in different ways. Finite groups can be described by writing down the group table consisting of all possible products of pairs of elements. A more compact way of defining a group is by generators and relations, also called the presentation of a group. Given any set F of generators, the free group generated by F surjects onto the group G. The kernel of this map is called the subgroup of relations, generated by some subset D. The presentation is usually denoted by ⟨F ∣ D⟩. For example, the group presentation ⟨a, b ∣ aba⁻¹b⁻¹⟩ describes a group which is isomorphic to Z × Z, the free abelian group on two generators. A string consisting of generator symbols and their inverses is called a word. Combinatorial group theory studies groups from the perspective of generators and relations. It is particularly useful where finiteness assumptions are satisfied, for example finitely generated groups, or finitely presented groups (i.e., in addition, the relations are finite). The area makes use of the connection of graphs via their fundamental groups. A fundamental theorem of this area is that every subgroup of a free group is free. There are several natural questions arising from giving a group by its presentation. The word problem asks whether two words are effectively the same group element.
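Working with words over generators, as in the combinatorial approach just described, starts from free reduction; the following Python sketch (an assumption of this sketch: a lowercase letter and its uppercase form denote a generator and its inverse) cancels adjacent inverse pairs:

def free_reduce(word):
    # Repeatedly cancel adjacent inverse pairs such as 'aA' or 'Aa'.
    out = []
    for ch in word:
        if out and out[-1] == ch.swapcase():
            out.pop()               # ch cancels the letter before it
        else:
            out.append(ch)
    return "".join(out)

print(free_reduce("abBAab"))        # -> "ab": 'bB' cancels, then 'aA', leaving 'ab'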
By relating the problem to Turing machines, one can show that there is in general no algorithm solving this task. Another, generally harder, algorithmically insoluble problem is the group isomorphism problem, which asks whether two groups given by different presentations are actually isomorphic. For example, the group with presentation is isomorphic to the additive group Z of integers, although this may not be immediately apparent. (Writing , one has ) Geometric group theory attacks these problems from a geometric viewpoint, either by viewing groups as geometric objects, or by finding suitable geometric objects a group acts on. The first idea is made precise by means of the Cayley graph, whose vertices correspond to group elements and edges correspond to right multiplication in the group. Given two elements, one constructs the word metric given by the length of the minimal path between the elements. A theorem of Milnor and Svarc then says that given a group G acting in a reasonable manner on a metric space X, for example a compact manifold, then G is quasi-isometric (i.e. looks similar from a distance) to the space X. Connection of groups and symmetry Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This occurs in many cases, for example If X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups. If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (an isometry). The corresponding group is called isometry group of X. If instead angles are preserved, one speaks of conformal maps. Conformal maps give rise to Kleinian groups, for example. Symmetries are not restricted to geometrical objects, but include algebraic objects as well. For instance, the equation has the two solutions and . In this case, the group that exchanges the two roots is the Galois group belonging to the equation. Every polynomial equation in one variable has a Galois group, that is a certain permutation group on its roots. The axioms of a group formalize the essential aspects of symmetry. Symmetries form a group: they are closed because if you take a symmetry of an object, and then apply another symmetry, the result will still be a symmetry. The identity keeping the object fixed is always a symmetry of an object. Existence of inverses is guaranteed by undoing the symmetry and the associativity comes from the fact that symmetries are functions on a space, and composition of functions is associative. Frucht's theorem says that every group is the symmetry group of some graph. So every abstract group is actually the symmetries of some explicit object. The saying of "preserving the structure" of an object can be made precise by working in a category. Maps preserving the structure are then the morphisms, and the symmetry group is the automorphism group of the object in question. Applications of group theory Applications of group theory abound. Almost all structures in abstract algebra are special cases of groups. Rings, for example, can be viewed as abelian groups (corresponding to addition) together with a second operation (corresponding to multiplication). Therefore, group theoretic arguments underlie large parts of the theory of those entities. 
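Returning to the Cayley graph and word metric described above, word lengths in a small group can be computed by breadth-first search; in this Python sketch the group S3 and the two adjacent transpositions are an illustrative, assumed choice of generating set:

from collections import deque

def compose(p, q):
    # Multiplication of permutations stored as tuples: (p o q)(i) = p(q(i)).
    return tuple(p[q[i]] for i in range(len(p)))

def word_lengths(generators, identity):
    # BFS on the Cayley graph: vertices are group elements, an edge joins g to g*s
    # for each generator s; the BFS depth of an element is its word length.
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        g = queue.popleft()
        for s in generators:
            h = compose(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    return dist

gens = [(1, 0, 2), (0, 2, 1)]                 # the transpositions (0 1) and (1 2)
lengths = word_lengths(gens, (0, 1, 2))
print(len(lengths), max(lengths.values()))    # 6 elements reached; the longest word has length 3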
Galois theory Galois theory uses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). The fundamental theorem of Galois theory provides a link between algebraic field extensions and group theory. It gives an effective criterion for the solvability of polynomial equations in terms of the solvability of the corresponding Galois group. For example, S5, the symmetric group on 5 elements, is not solvable, which implies that the general quintic equation cannot be solved by radicals in the way equations of lower degree can. The theory, being one of the historical roots of group theory, is still fruitfully applied to yield new results in areas such as class field theory. Algebraic topology Algebraic topology is another domain which prominently associates groups to the objects the theory is interested in. There, groups are used to describe certain invariants of topological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to some deformation. For example, the fundamental group "counts" how many paths in the space are essentially different. The Poincaré conjecture, proved in 2002/2003 by Grigori Perelman, is a prominent application of this idea. The influence is not unidirectional, though. For example, algebraic topology makes use of Eilenberg–MacLane spaces, which are spaces with prescribed homotopy groups. Similarly, algebraic K-theory relies in a way on classifying spaces of groups. Finally, the name of the torsion subgroup of an infinite group shows the legacy of topology in group theory. Algebraic geometry Algebraic geometry likewise uses group theory in many ways. Abelian varieties have been introduced above. The presence of the group operation yields additional information which makes these varieties particularly accessible. They also often serve as a test for new conjectures (for example, the Hodge conjecture in certain cases). The one-dimensional case, namely elliptic curves, is studied in particular detail. They are both theoretically and practically intriguing. In another direction, toric varieties are algebraic varieties acted on by a torus. Toroidal embeddings have recently led to advances in algebraic geometry, in particular resolution of singularities. Algebraic number theory Algebraic number theory makes use of groups for some important applications. For example, Euler's product formula captures the fact that any integer decomposes in a unique way into primes. The failure of this statement for more general rings gives rise to class groups and regular primes, which feature in Kummer's treatment of Fermat's Last Theorem. Harmonic analysis Analysis on Lie groups and certain other groups is called harmonic analysis. Haar measures, that is, integrals invariant under translation in a Lie group, are used for pattern recognition and other image processing techniques. Combinatorics In combinatorics, the notion of permutation group and the concept of group action are often used to simplify the counting of a set of objects; see in particular Burnside's lemma. Music The presence of the 12-periodicity in the circle of fifths yields applications of elementary group theory in musical set theory. Transformational theory models musical transformations as elements of a mathematical group. Physics In physics, groups are important because they describe the symmetries which the laws of physics seem to obey.
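Burnside's lemma, cited in the combinatorics paragraph above, says the number of orbits equals the average number of configurations fixed by the group elements; a short Python sketch of a standard textbook use (counting necklaces under rotation, an example assumed here for illustration and not taken from the text):

from math import gcd

def count_necklaces(n, colours):
    # Cyclic group C_n acting on strings of n beads by rotation: a rotation by k
    # fixes colours ** gcd(n, k) strings, and Burnside's lemma averages these counts.
    return sum(colours ** gcd(n, k) for k in range(n)) // n

print(count_necklaces(6, 2))    # 14 distinct necklaces of 6 beads in 2 colours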
According to Noether's theorem, every continuous symmetry of a physical system corresponds to a conservation law of the system. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include the Standard Model, gauge theory, the Lorentz group, and the Poincaré group. Group theory can be used to resolve the incompleteness of the statistical interpretations of mechanics developed by Willard Gibbs, relating to the summing of an infinite number of probabilities to yield a meaningful solution. Chemistry and materials science In chemistry and materials science, point groups are used to classify regular polyhedra, and the symmetries of molecules, and space groups to classify crystal structures. The assigned groups can then be used to determine physical properties (such as chemical polarity and chirality), spectroscopic properties (particularly useful for Raman spectroscopy, infrared spectroscopy, circular dichroism spectroscopy, magnetic circular dichroism spectroscopy, UV/Vis spectroscopy, and fluorescence spectroscopy), and to construct molecular orbitals. Molecular symmetry is responsible for many physical and spectroscopic properties of compounds and provides relevant information about how chemical reactions occur. In order to assign a point group for any given molecule, it is necessary to find the set of symmetry operations present on it. The symmetry operation is an action, such as a rotation around an axis or a reflection through a mirror plane. In other words, it is an operation that moves the molecule such that it is indistinguishable from the original configuration. In group theory, the rotation axes and mirror planes are called "symmetry elements". These elements can be a point, line or plane with respect to which the symmetry operation is carried out. The symmetry operations of a molecule determine the specific point group for this molecule. In chemistry, there are five important symmetry operations. They are identity operation (E), rotation operation or proper rotation (Cn), reflection operation (σ), inversion (i) and rotation reflection operation or improper rotation (Sn). The identity operation (E) consists of leaving the molecule as it is. This is equivalent to any number of full rotations around any axis. This is a symmetry of all molecules, whereas the symmetry group of a chiral molecule consists of only the identity operation. An identity operation is a characteristic of every molecule even if it has no symmetry. Rotation around an axis (Cn) consists of rotating the molecule around a specific axis by a specific angle. It is rotation through the angle 360°/n, where n is an integer, about a rotation axis. For example, if a water molecule rotates 180° around the axis that passes through the oxygen atom and between the hydrogen atoms, it is in the same configuration as it started. In this case, , since applying it twice produces the identity operation. In molecules with more than one rotation axis, the Cn axis having the largest value of n is the highest order rotation axis or principal axis. For example in boron trifluoride (BF3), the highest order of rotation axis is C3, so the principal axis of rotation is C3. In the reflection operation (σ) many molecules have mirror planes, although they may not be obvious. 
The reflection operation exchanges left and right, as if each point had moved perpendicularly through the plane to a position exactly as far from the plane as when it started. When the plane is perpendicular to the principal axis of rotation, it is called σh (horizontal). Other planes, which contain the principal axis of rotation, are labeled vertical (σv) or dihedral (σd). Inversion (i ) is a more complex operation. Each point moves through the center of the molecule to a position opposite the original position and as far from the central point as where it started. Many molecules that seem at first glance to have an inversion center do not; for example, methane and other tetrahedral molecules lack inversion symmetry. To see this, hold a methane model with two hydrogen atoms in the vertical plane on the right and two hydrogen atoms in the horizontal plane on the left. Inversion results in two hydrogen atoms in the horizontal plane on the right and two hydrogen atoms in the vertical plane on the left. Inversion is therefore not a symmetry operation of methane, because the orientation of the molecule following the inversion operation differs from the original orientation. And the last operation is improper rotation or rotation reflection operation (Sn) requires rotation of  360°/n, followed by reflection through a plane perpendicular to the axis of rotation. Cryptography Very large groups of prime order constructed in elliptic curve cryptography serve for public-key cryptography. Cryptographical methods of this kind benefit from the flexibility of the geometric objects, hence their group structures, together with the complicated structure of these groups, which make the discrete logarithm very hard to calculate. One of the earliest encryption protocols, Caesar's cipher, may also be interpreted as a (very easy) group operation. Most cryptographic schemes use groups in some way. In particular Diffie–Hellman key exchange uses finite cyclic groups. So the term group-based cryptography refers mostly to cryptographic protocols that use infinite non-abelian groups such as a braid group.
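The use of finite cyclic groups in Diffie–Hellman key exchange, mentioned above, can be sketched in a few lines of Python; the prime, the base, and the variable names are toy assumptions, orders of magnitude too small for real cryptography:

import secrets

p, g = 2_147_483_647, 5                 # a small prime modulus and base (illustrative only)

a = secrets.randbelow(p - 2) + 1        # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1        # Bob's secret exponent

A = pow(g, a, p)                        # Alice publishes g^a mod p
B = pow(g, b, p)                        # Bob publishes g^b mod p

# Both parties compute the same group element g^(a*b) mod p without revealing a or b;
# recovering it from A and B alone is the discrete logarithm problem mentioned in the text.
assert pow(B, a, p) == pow(A, b, p)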
Mathematics
Algebra
null
41932
https://en.wikipedia.org/wiki/Accuracy%20and%20precision
Accuracy and precision
Accuracy and precision are two measures of observational error. Accuracy is how close a given set of measurements (observations or readings) are to their true value. Precision is how close the measurements are to each other. The International Organization for Standardization (ISO) defines a related measure: trueness, "the closeness of agreement between the arithmetic mean of a large number of test results and the true or accepted reference value." While precision is a description of random errors (a measure of statistical variability), accuracy has two different definitions: More commonly, a description of systematic errors (a measure of statistical bias of a given measure of central tendency, such as the mean). In this definition of "accuracy", the concept is independent of "precision", so a particular set of data can be said to be accurate, precise, both, or neither. This concept corresponds to ISO's trueness. A combination of both precision and trueness, accounting for the two types of observational error (random and systematic), so that high accuracy requires both high precision and high trueness. This usage corresponds to ISO's definition of accuracy (trueness and precision). Common technical definition In simpler terms, given a statistical sample or set of data points from repeated measurements of the same quantity, the sample or set can be said to be accurate if their average is close to the true value of the quantity being measured, while the set can be said to be precise if their standard deviation is relatively small. In the fields of science and engineering, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's true value. The precision of a measurement system, related to reproducibility and repeatability, is the degree to which repeated measurements under unchanged conditions show the same results. Although the two words precision and accuracy can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method. The field of statistics, where the interpretation of measurements plays a central role, prefers to use the terms bias and variability instead of accuracy and precision: bias is the amount of inaccuracy and variability is the amount of imprecision. A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy. The result would be a consistent yet inaccurate string of results from the flawed experiment. Eliminating the systematic error improves accuracy but does not change precision. A measurement system is considered valid if it is both accurate and precise. Related terms include bias (non-random or directed effects caused by a factor or factors unrelated to the independent variable) and error (random variability). The terminology is also applied to indirect measurements—that is, values obtained by a computational procedure from observed data. In addition to accuracy and precision, measurements may also have a measurement resolution, which is the smallest change in the underlying physical quantity that produces a response in the measurement. In numerical analysis, accuracy is also the nearness of a calculation to the true value; while precision is the resolution of the representation, typically defined by the number of decimal or binary digits. 
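A minimal numerical sketch of the definition just given, in Python with made-up readings (the numbers and the "true value" are assumptions for illustration): the distance of the mean from the true value plays the role of trueness, and the standard deviation the role of precision.

import statistics

true_value = 10.0
readings = [10.2, 10.1, 10.3, 10.2, 10.2]       # illustrative repeated measurements

bias = statistics.mean(readings) - true_value    # systematic error: how far the mean sits from the true value
spread = statistics.stdev(readings)              # random error: how tightly the readings cluster

print(f"bias = {bias:+.2f}, standard deviation = {spread:.2f}")
# These readings are precise (small spread) but not accurate (a consistent +0.2 offset),
# the "precise but not accurate" case described in the text.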
In military terms, accuracy refers primarily to the accuracy of fire (justesse de tir), the precision of fire expressed by the closeness of a grouping of shots at and around the centre of the target. A shift in the meaning of these terms appeared with the publication of the ISO 5725 series of standards in 1994, which is also reflected in the 2008 issue of the BIPM International Vocabulary of Metrology (VIM), items 2.13 and 2.14. According to ISO 5725-1, the general term "accuracy" is used to describe the closeness of a measurement to the true value. When the term is applied to sets of measurements of the same measurand, it involves a component of random error and a component of systematic error. In this case trueness is the closeness of the mean of a set of measurement results to the actual (true) value, that is the systematic error, and precision is the closeness of agreement among a set of results, that is the random error. ISO 5725-1 and VIM also avoid the use of the term "bias", previously specified in BS 5497-1, because it has different connotations outside the fields of science and engineering, as in medicine and law. Quantification and applications In industrial instrumentation, accuracy is the measurement tolerance, or transmission of the instrument and defines the limits of the errors made when the instrument is used in normal operating conditions. Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the true value. The accuracy and precision of a measurement process is usually established by repeatedly measuring some traceable reference standard. Such standards are defined in the International System of Units (abbreviated SI from French: Système international d'unités) and maintained by national standards organizations such as the National Institute of Standards and Technology in the United States. This also applies when measurements are repeated and averaged. In that case, the term standard error is properly applied: the precision of the average is equal to the known standard deviation of the process divided by the square root of the number of measurements averaged. Further, the central limit theorem shows that the probability distribution of the averaged measurements will be closer to a normal distribution than that of individual measurements. With regard to accuracy we can distinguish: the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration. the combined effect of that and precision. A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Where not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m (the last significant place is the tenths place), while a recording of 843 m would imply a margin of error of 0.5 m (the last significant digits are the units). A reading of 8,000 m, with trailing zeros and no decimal point, is ambiguous; the trailing zeros may or may not be intended as significant figures. To avoid this ambiguity, the number could be represented in scientific notation: 8.0 × 103 m indicates that the first zero is significant (hence a margin of 50 m) while 8.000 × 103 m indicates that all three zeros are significant, giving a margin of 0.5 m. 
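The standard-error statement above (the precision of an average equals the process standard deviation divided by the square root of the number of measurements) can be checked by simulation; the Python sketch below uses assumed values for the true mean, the spread, and the sample sizes.

import random
import statistics

random.seed(0)
sigma, n, trials = 2.0, 25, 2000

# Repeat the experiment many times; each experiment averages n readings drawn with known sigma.
averages = [statistics.mean(random.gauss(100.0, sigma) for _ in range(n)) for _ in range(trials)]

observed = statistics.stdev(averages)
predicted = sigma / n ** 0.5                  # standard error of the mean
print(f"observed {observed:.3f} vs predicted {predicted:.3f}")   # both close to 0.4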
Similarly, one can use a multiple of the basic measurement unit: 8.0 km is equivalent to 8.0 × 103 m. It indicates a margin of 0.05 km (50 m). However, reliance on this convention can lead to false precision errors when accepting data from sources that do not obey it. For example, a source reporting a number like 153,753 with precision +/- 5,000 looks like it has precision +/- 0.5. Under the convention it would have been rounded to 150,000. Alternatively, in a scientific context, if it is desired to indicate the margin of error with more precision, one can use a notation such as 7.54398(23) × 10−10 m, meaning a range of between 7.54375 and 7.54421 × 10−10 m. Precision includes: repeatability — the variation arising when all efforts are made to keep conditions constant by using the same instrument and operator, and repeating during a short time period; and reproducibility — the variation arising when using the same measurement process among different instruments and operators, and over longer time periods. In engineering, precision is often taken as three times the standard deviation of the measurements taken, representing the range within which 99.73% of measurements can occur. For example, an ergonomist measuring the human body can be confident that 99.73% of their extracted measurements fall within ± 0.7 cm (if using the GRYPHON processing system) or ± 13 cm (if using unprocessed data). In classification In binary classification Accuracy is also used as a statistical measure of how well a binary classification test correctly identifies or excludes a condition. That is, the accuracy is the proportion of correct predictions (both true positives and true negatives) among the total number of cases examined. As such, it compares estimates of pre- and post-test probability. To make the context clear by the semantics, it is often referred to as the "Rand accuracy" or "Rand index". It is a parameter of the test. The formula for quantifying binary accuracy is: Accuracy = (TP + TN) / (TP + TN + FP + FN), where TP = true positives; TN = true negatives; FP = false positives; FN = false negatives. In this context, the concepts of trueness and precision as defined by ISO 5725-1 are not applicable. One reason is that there is not a single “true value” of a quantity, but rather two possible true values for every case, while accuracy is an average across all cases and therefore takes into account both values. However, the term precision is used in this context to mean a different metric originating from the field of information retrieval (see below). In multiclass classification When computing accuracy in multiclass classification, accuracy is simply the fraction of correct classifications: the number of correct classifications divided by the total number of classifications made. This is usually expressed as a percentage. For example, if a classifier makes ten predictions and nine of them are correct, the accuracy is 90%. Accuracy is sometimes also viewed as a micro metric, to underline that it tends to be greatly affected by the particular class prevalence in a dataset and the classifier's biases. Furthermore, it is also called top-1 accuracy to distinguish it from top-5 accuracy, common in convolutional neural network evaluation. To evaluate top-5 accuracy, the classifier must provide relative likelihoods for each class. When these are sorted, a classification is considered correct if the correct classification falls anywhere within the top 5 predictions made by the network. Top-5 accuracy was popularized by the ImageNet challenge. It is usually higher than top-1 accuracy, as any correct predictions in the 2nd through 5th positions will not improve the top-1 score, but do improve the top-5 score.
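The binary-accuracy formula and the top-k idea above translate directly into Python; the counts and scores below are hypothetical.

def binary_accuracy(tp, tn, fp, fn):
    # Proportion of correct predictions among all cases examined.
    return (tp + tn) / (tp + tn + fp + fn)

def in_top_k(scores, true_label, k=5):
    # Top-k accuracy counts a prediction as correct if the true label is among
    # the k highest-scoring classes (k = 5 gives the top-5 accuracy of the text).
    ranked = sorted(scores, key=scores.get, reverse=True)
    return true_label in ranked[:k]

print(binary_accuracy(tp=40, tn=45, fp=5, fn=10))                        # 0.85
print(in_top_k({"cat": 0.5, "dog": 0.3, "fox": 0.2}, "dog", k=2))        # True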
In psychometrics and psychophysics In psychometrics and psychophysics, the term accuracy is interchangeably used with validity and constant error. Precision is a synonym for reliability and variable error. The validity of a measurement instrument or psychological test is established through experiment or correlation with behavior. Reliability is established with a variety of statistical techniques, classically through an internal consistency test like Cronbach's alpha to ensure sets of related questions have related responses, and then comparison of those related questions between reference and target populations. In logic simulation In logic simulation, a common mistake in evaluation of accurate models is to compare a logic simulation model to a transistor circuit simulation model. This is a comparison of differences in precision, not accuracy. Precision is measured with respect to detail and accuracy is measured with respect to reality. In information systems Information retrieval systems, such as databases and web search engines, are evaluated by many different metrics, some of which are derived from the confusion matrix, which divides results into true positives (documents correctly retrieved), true negatives (documents correctly not retrieved), false positives (documents incorrectly retrieved), and false negatives (documents incorrectly not retrieved). Commonly used metrics include the notions of precision and recall. In this context, precision is defined as the fraction of documents correctly retrieved compared to the documents retrieved (true positives divided by true positives plus false positives), using a set of ground truth relevant results selected by humans. Recall is defined as the fraction of documents correctly retrieved compared to the relevant documents (true positives divided by true positives plus false negatives). Less commonly, the metric of accuracy is used; it is defined as the fraction of documents correctly classified compared to the total number of documents (true positives plus true negatives divided by true positives plus true negatives plus false positives plus false negatives). None of these metrics take into account the ranking of results. Ranking is very important for web search engines because readers seldom go past the first page of results, and there are too many documents on the web to manually classify all of them as to whether they should be included or excluded from a given search. Adding a cutoff at a particular number of results takes ranking into account to some degree. The measure precision at k, for example, is a measure of precision looking only at the top ten (k=10) search results. More sophisticated metrics, such as discounted cumulative gain, take into account each individual ranking, and are more commonly used where this is important. In cognitive systems In cognitive systems, accuracy and precision are used to characterize and measure results of a cognitive process performed by biological or artificial entities, where a cognitive process is a transformation of data, information, knowledge, or wisdom to a higher-valued form (see the DIKW pyramid). Sometimes, a cognitive process produces exactly the intended or desired output, but sometimes it produces output far from the intended or desired. Furthermore, repetitions of a cognitive process do not always produce the same output. Cognitive accuracy (CA) is the propensity of a cognitive process to produce the intended or desired output. Cognitive precision (CP) is the propensity of a cognitive process to produce the same output.
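Precision, recall, and precision at k as defined for information retrieval above, in a short Python sketch; the retrieved and relevant document identifiers are assumptions for illustration.

def precision(retrieved, relevant):
    # Fraction of the retrieved documents that are relevant.
    return len(set(retrieved) & set(relevant)) / len(retrieved)

def recall(retrieved, relevant):
    # Fraction of the relevant documents that were retrieved.
    return len(set(retrieved) & set(relevant)) / len(relevant)

def precision_at_k(ranked_results, relevant, k):
    # Precision computed over only the top k ranked results.
    return precision(ranked_results[:k], relevant)

ranked = ["d3", "d1", "d7", "d2", "d9"]     # hypothetical ranked search results
relevant = {"d1", "d2", "d4"}               # hypothetical ground-truth relevant set

print(precision(ranked, relevant))          # 0.4
print(recall(ranked, relevant))             # about 0.67
print(precision_at_k(ranked, relevant, 3))  # about 0.33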
To measure augmented cognition in human/cog ensembles, where one or more humans work collaboratively with one or more cognitive systems (cogs), increases in cognitive accuracy and cognitive precision assist in measuring the degree of cognitive augmentation.
Physical sciences
Measurement: General
null
41957
https://en.wikipedia.org/wiki/Electrical%20impedance
Electrical impedance
In electrical engineering, impedance is the opposition to alternating current presented by the combined effect of resistance and reactance in a circuit. Quantitatively, the impedance of a two-terminal circuit element is the ratio of the complex representation of the sinusoidal voltage between its terminals, to the complex representation of the current flowing through it. In general, it depends upon the frequency of the sinusoidal voltage. Impedance extends the concept of resistance to alternating current (AC) circuits, and possesses both magnitude and phase, unlike resistance, which has only magnitude. Impedance can be represented as a complex number, with the same units as resistance, for which the SI unit is the ohm (). Its symbol is usually , and it may be represented by writing its magnitude and phase in the polar form . However, Cartesian complex number representation is often more powerful for circuit analysis purposes. The notion of impedance is useful for performing AC analysis of electrical networks, because it allows relating sinusoidal voltages and currents by a simple linear law. In multiple port networks, the two-terminal definition of impedance is inadequate, but the complex voltages at the ports and the currents flowing through them are still linearly related by the impedance matrix. The reciprocal of impedance is admittance, whose SI unit is the siemens, formerly called mho. Instruments used to measure the electrical impedance are called impedance analyzers. History Perhaps the earliest use of complex numbers in circuit analysis was by Johann Victor Wietlisbach in 1879 in analysing the Maxwell bridge. Wietlisbach avoided using differential equations by expressing AC currents and voltages as exponential functions with imaginary exponents (see ). Wietlisbach found the required voltage was given by multiplying the current by a complex number (impedance), although he did not identify this as a general parameter in its own right. The term impedance was coined by Oliver Heaviside in July 1886. Heaviside recognised that the "resistance operator" (impedance) in his operational calculus was a complex number. In 1887 he showed that there was an AC equivalent to Ohm's law. Arthur Kennelly published an influential paper on impedance in 1893. Kennelly arrived at a complex number representation in a rather more direct way than using imaginary exponential functions. Kennelly followed the graphical representation of impedance (showing resistance, reactance, and impedance as the lengths of the sides of a right angle triangle) developed by John Ambrose Fleming in 1889. Impedances could thus be added vectorially. Kennelly realised that this graphical representation of impedance was directly analogous to graphical representation of complex numbers (Argand diagram). Problems in impedance calculation could thus be approached algebraically with a complex number representation. Later that same year, Kennelly's work was generalised to all AC circuits by Charles Proteus Steinmetz. Steinmetz not only represented impedances by complex numbers but also voltages and currents. Unlike Kennelly, Steinmetz was thus able to express AC equivalents of DC laws such as Ohm's and Kirchhoff's laws. Steinmetz's work was highly influential in spreading the technique amongst engineers. 
Introduction In addition to resistance as seen in DC circuits, impedance in AC circuits includes the effects of the induction of voltages in conductors by the magnetic fields (inductance), and the electrostatic storage of charge induced by voltages between conductors (capacitance). The impedance caused by these two effects is collectively referred to as reactance and forms the imaginary part of complex impedance whereas resistance forms the real part. Complex impedance The impedance of a two-terminal circuit element is represented as a complex quantity . The polar form conveniently captures both magnitude and phase characteristics as where the magnitude represents the ratio of the voltage difference amplitude to the current amplitude, while the argument (commonly given the symbol ) gives the phase difference between voltage and current. is the imaginary unit, and is used instead of in this context to avoid confusion with the symbol for electric current. In Cartesian form, impedance is defined as where the real part of impedance is the resistance and the imaginary part is the reactance . Where it is needed to add or subtract impedances, the cartesian form is more convenient; but when quantities are multiplied or divided, the calculation becomes simpler if the polar form is used. A circuit calculation, such as finding the total impedance of two impedances in parallel, may require conversion between forms several times during the calculation. Conversion between the forms follows the normal conversion rules of complex numbers. Complex voltage and current To simplify calculations, sinusoidal voltage and current waves are commonly represented as complex-valued functions of time denoted as and . The impedance of a bipolar circuit is defined as the ratio of these quantities: Hence, denoting , we have The magnitude equation is the familiar Ohm's law applied to the voltage and current amplitudes, while the second equation defines the phase relationship. Validity of complex representation This representation using complex exponentials may be justified by noting that (by Euler's formula): The real-valued sinusoidal function representing either voltage or current may be broken into two complex-valued functions. By the principle of superposition, we may analyse the behaviour of the sinusoid on the left-hand side by analysing the behaviour of the two complex terms on the right-hand side. Given the symmetry, we only need to perform the analysis for one right-hand term. The results are identical for the other. At the end of any calculation, we may return to real-valued sinusoids by further noting that Ohm's law The meaning of electrical impedance can be understood by substituting it into Ohm's law. Assuming a two-terminal circuit element with impedance is driven by a sinusoidal voltage or current as above, there holds The magnitude of the impedance acts just like resistance, giving the drop in voltage amplitude across an impedance for a given current . The phase factor tells us that the current lags the voltage by a phase (i.e., in the time domain, the current signal is shifted later with respect to the voltage signal). Just as impedance extends Ohm's law to cover AC circuits, other results from DC circuit analysis, such as voltage division, current division, Thévenin's theorem and Norton's theorem, can also be extended to AC circuits by replacing resistance with impedance. 
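Python's complex numbers make the Cartesian and polar descriptions of impedance above easy to reproduce; the impedance value and the current phasor below are assumptions for illustration.

import cmath

Z = complex(3.0, 4.0)                  # resistance 3 ohm, reactance 4 ohm

magnitude, phase = cmath.polar(Z)      # polar form: |Z| and arg(Z)
print(magnitude, phase)                # 5.0 ohm and about 0.927 rad

# AC Ohm's law with complex amplitudes: V = Z * I.
I = 2.0 + 0.0j                         # a 2 A current phasor with zero phase
V = Z * I
print(abs(V), cmath.phase(V))          # 10 V amplitude; the voltage leads the current by arg(Z)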
Phasors A phasor is represented by a constant complex number, usually expressed in exponential form, representing the complex amplitude (magnitude and phase) of a sinusoidal function of time. Phasors are used by electrical engineers to simplify computations involving sinusoids (such as in AC circuits), where they can often reduce a differential equation problem to an algebraic one. The impedance of a circuit element can be defined as the ratio of the phasor voltage across the element to the phasor current through the element, as determined by the relative amplitudes and phases of the voltage and current. This is identical to the definition from Ohm's law given above, recognising that the factors of cancel. Device examples Resistor The impedance of an ideal resistor is purely real and is called resistive impedance: In this case, the voltage and current waveforms are proportional and in phase. Inductor and capacitor Ideal inductors and capacitors have a purely imaginary reactive impedance: the impedance of inductors increases as frequency increases; the impedance of capacitors decreases as frequency increases; In both cases, for an applied sinusoidal voltage, the resulting current is also sinusoidal, but in quadrature, 90 degrees out of phase with the voltage. However, the phases have opposite signs: in an inductor, the current is lagging; in a capacitor the current is leading. Note the following identities for the imaginary unit and its reciprocal: Thus the inductor and capacitor impedance equations can be rewritten in polar form: The magnitude gives the change in voltage amplitude for a given current amplitude through the impedance, while the exponential factors give the phase relationship. Deriving the device-specific impedances What follows below is a derivation of impedance for each of the three basic circuit elements: the resistor, the capacitor, and the inductor. Although the idea can be extended to define the relationship between the voltage and current of any arbitrary signal, these derivations assume sinusoidal signals. In fact, this applies to any arbitrary periodic signals, because these can be approximated as a sum of sinusoids through Fourier analysis. Resistor For a resistor, there is the relation which is Ohm's law. Considering the voltage signal to be it follows that This says that the ratio of AC voltage amplitude to alternating current (AC) amplitude across a resistor is , and that the AC voltage leads the current across a resistor by 0 degrees. This result is commonly expressed as Capacitor For a capacitor, there is the relation: Considering the voltage signal to be it follows that and thus, as previously, Conversely, if the current through the circuit is assumed to be sinusoidal, its complex representation being then integrating the differential equation leads to The Const term represents a fixed potential bias superimposed to the AC sinusoidal potential, that plays no role in AC analysis. For this purpose, this term can be assumed to be 0, hence again the impedance Inductor For the inductor, we have the relation (from Faraday's law): This time, considering the current signal to be: it follows that: This result is commonly expressed in polar form as or, using Euler's formula, as As in the case of capacitors, it is also possible to derive this formula directly from the complex representations of the voltages and currents, or by assuming a sinusoidal voltage between the two poles of the inductor. 
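The three element impedances just derived can be written as one-line Python functions; the component values and the 50 Hz test frequency are illustrative assumptions.

import math

def z_resistor(R, f):
    return complex(R, 0.0)                        # purely real, independent of frequency

def z_inductor(L, f):
    return 1j * 2 * math.pi * f * L               # jwL: grows with frequency, current lags the voltage

def z_capacitor(C, f):
    return 1.0 / (1j * 2 * math.pi * f * C)       # 1/(jwC): shrinks with frequency, current leads the voltage

f = 50.0
print(z_resistor(100.0, f))                       # (100+0j) ohm
print(z_inductor(0.1, f))                         # roughly 31.4j ohm
print(z_capacitor(10e-6, f))                      # roughly -318j ohm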
In the latter case, integrating the differential equation above leads to a constant term for the current, that represents a fixed DC bias flowing through the inductor. This is set to zero because AC analysis using frequency domain impedance considers one frequency at a time and DC represents a separate frequency of zero hertz in this context. Generalised s-plane impedance Impedance defined in terms of jω can strictly be applied only to circuits that are driven with a steady-state AC signal. The concept of impedance can be extended to a circuit energised with any arbitrary signal by using complex frequency instead of jω. Complex frequency is given the symbol and is, in general, a complex number. Signals are expressed in terms of complex frequency by taking the Laplace transform of the time domain expression of the signal. The impedance of the basic circuit elements in this more general notation is as follows: For a DC circuit, this simplifies to . For a steady-state sinusoidal AC signal . Formal derivation The impedance of an electrical component is defined as the ratio between the Laplace transforms of the voltage over it and the current through it, i.e. where is the complex Laplace parameter. As an example, according to the I-V-law of a capacitor, , from which it follows that . In the phasor regime (steady-state AC, meaning all signals are represented mathematically as simple complex exponentials and oscillating at a common frequency ), impedance can simply be calculated as the voltage-to-current ratio, in which the common time-dependent factor cancels out: Again, for a capacitor, one gets that , and hence . The phasor domain is sometimes dubbed the frequency domain, although it lacks one of the dimensions of the Laplace parameter. For steady-state AC, the polar form of the complex impedance relates the amplitude and phase of the voltage and current. In particular: The magnitude of the complex impedance is the ratio of the voltage amplitude to the current amplitude; The phase of the complex impedance is the phase shift by which the current lags the voltage. These two relationships hold even after taking the real part of the complex exponentials (see phasors), which is the part of the signal one actually measures in real-life circuits. Resistance vs reactance Resistance and reactance together determine the magnitude and phase of the impedance through the following relations: In many applications, the relative phase of the voltage and current is not critical so only the magnitude of the impedance is significant. Resistance Resistance is the real part of impedance; a device with a purely resistive impedance exhibits no phase shift between the voltage and current. Reactance Reactance is the imaginary part of the impedance; a component with a finite reactance induces a phase shift between the voltage across it and the current through it. A purely reactive component is distinguished by the sinusoidal voltage across the component being in quadrature with the sinusoidal current through the component. This implies that the component alternately absorbs energy from the circuit and then returns energy to the circuit. A pure reactance does not dissipate any power. Capacitive reactance A capacitor has a purely reactive impedance that is inversely proportional to the signal frequency. A capacitor consists of two conductors separated by an insulator, also known as a dielectric. The minus sign indicates that the imaginary part of the impedance is negative. 
At low frequencies, a capacitor approaches an open circuit so no current flows through it. A DC voltage applied across a capacitor causes charge to accumulate on one side; the electric field due to the accumulated charge is the source of the opposition to the current. When the potential associated with the charge exactly balances the applied voltage, the current goes to zero. Driven by an AC supply, a capacitor accumulates only a limited charge before the potential difference changes sign and the charge dissipates. The higher the frequency, the less charge accumulates and the smaller the opposition to the current. Inductive reactance Inductive reactance is proportional to the signal frequency and the inductance . An inductor consists of a coiled conductor. Faraday's law of electromagnetic induction gives the back emf (voltage opposing current) due to a rate-of-change of magnetic flux density through a current loop. For an inductor consisting of a coil with loops this gives: The back-emf is the source of the opposition to current flow. A constant direct current has a zero rate-of-change, and sees an inductor as a short-circuit (it is typically made from a material with a low resistivity). An alternating current has a time-averaged rate-of-change that is proportional to frequency, this causes the increase in inductive reactance with frequency. Total reactance The total reactance is given by ( is negative) so that the total impedance is Combining impedances The total impedance of many simple networks of components can be calculated using the rules for combining impedances in series and parallel. The rules are identical to those for combining resistances, except that the numbers in general are complex numbers. The general case, however, requires equivalent impedance transforms in addition to series and parallel. Series combination For components connected in series, the current through each circuit element is the same; the total impedance is the sum of the component impedances. Or explicitly in real and imaginary terms: Parallel combination For components connected in parallel, the voltage across each circuit element is the same; the ratio of currents through any two elements is the inverse ratio of their impedances. Hence the inverse total impedance is the sum of the inverses of the component impedances: or, when n = 2: The equivalent impedance can be calculated in terms of the equivalent series resistance and reactance . Measurement The measurement of the impedance of devices and transmission lines is a practical problem in radio technology and other fields. Measurements of impedance may be carried out at one frequency, or the variation of device impedance over a range of frequencies may be of interest. The impedance may be measured or displayed directly in ohms, or other values related to impedance may be displayed; for example, in a radio antenna, the standing wave ratio or reflection coefficient may be more useful than the impedance alone. The measurement of impedance requires the measurement of the magnitude of voltage and current, and the phase difference between them. Impedance is often measured by "bridge" methods, similar to the direct-current Wheatstone bridge; a calibrated reference impedance is adjusted to balance off the effect of the impedance of the device under test. Impedance measurement in power electronic devices may require simultaneous measurement and provision of power to the operating device. 
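The series and parallel rules above apply unchanged to complex numbers; a brief Python sketch with assumed component impedances:

def series(*impedances):
    # Series combination: the impedances simply add.
    return sum(impedances)

def parallel(*impedances):
    # Parallel combination: the reciprocal of the total is the sum of the reciprocals.
    return 1.0 / sum(1.0 / Z for Z in impedances)

R, ZL, ZC = 50.0, 31.4j, -318.3j               # illustrative resistor, inductor, and capacitor impedances
print(series(R, ZL))                           # (50+31.4j) ohm
print(parallel(R, ZC))                         # a complex value with smaller magnitude than either branch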
The impedance of a device can be calculated by complex division of the voltage and current. The impedance of the device can be calculated by applying a sinusoidal voltage to the device in series with a resistor, and measuring the voltage across the resistor and across the device. Performing this measurement by sweeping the frequencies of the applied signal provides the impedance phase and magnitude. The use of an impulse response may be used in combination with the fast Fourier transform (FFT) to rapidly measure the electrical impedance of various electrical devices. The LCR meter (Inductance (L), Capacitance (C), and Resistance (R)) is a device commonly used to measure the inductance, resistance and capacitance of a component; from these values, the impedance at any frequency can be calculated. Example Consider an LC tank circuit. The complex impedance of the circuit is It is immediately seen that the value of is minimal (actually equal to 0 in this case) whenever Therefore, the fundamental resonance angular frequency is Variable impedance In general, neither impedance nor admittance can vary with time, since they are defined for complex exponentials in which . If the complex exponential voltage to current ratio changes over time or amplitude, the circuit element cannot be described using the frequency domain. However, many components and systems (e.g., varicaps that are used in radio tuners) may exhibit non-linear or time-varying voltage to current ratios that seem to be linear time-invariant (LTI) for small signals and over small observation windows, so they can be roughly described as if they had a time-varying impedance. This description is an approximation: Over large signal swings or wide observation windows, the voltage to current relationship will not be LTI and cannot be described by impedance.
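For the LC tank example above, the impedance extremum occurs where the inductive and capacitive reactances cancel; a Python sketch of the resulting resonance frequency, with assumed component values:

import math

def resonance_frequency_hz(L, C):
    # The reactances cancel at the angular frequency w0 = 1 / sqrt(L * C).
    omega0 = 1.0 / math.sqrt(L * C)
    return omega0 / (2 * math.pi)

print(resonance_frequency_hz(L=10e-3, C=100e-9))   # about 5.03 kHz for these assumed values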
Physical sciences
Electrical circuits
null
41958
https://en.wikipedia.org/wiki/Lidar
Lidar
Lidar (, also LIDAR, LiDAR or LADAR, an acronym of "light detection and ranging" or "laser imaging, detection, and ranging") is a method for determining ranges by targeting an object or a surface with a laser and measuring the time for the reflected light to return to the receiver. Lidar may operate in a fixed direction (e.g., vertical) or it may scan multiple directions, in which case it is known as lidar scanning or 3D laser scanning, a special combination of 3-D scanning and laser scanning. Lidar has terrestrial, airborne, and mobile applications. Lidar is commonly used to make high-resolution maps, with applications in surveying, geodesy, geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, atmospheric physics, laser guidance, airborne laser swathe mapping (ALSM), and laser altimetry. It is used to make digital 3-D representations of areas on the Earth's surface and ocean bottom of the intertidal and near coastal zone by varying the wavelength of light. It has also been increasingly used in control and navigation for autonomous cars and for the helicopter Ingenuity on its record-setting flights over the terrain of Mars. The evolution of quantum technology has given rise to the emergence of Quantum Lidar, demonstrating higher efficiency and sensitivity when compared to conventional lidar systems. History and etymology Under the direction of Malcolm Stitch, the Hughes Aircraft Company introduced the first lidar-like system in 1961, shortly after the invention of the laser. Intended for satellite tracking, this system combined laser-focused imaging with the ability to calculate distances by measuring the time for a signal to return using appropriate sensors and data acquisition electronics. It was originally called "Colidar" an acronym for "coherent light detecting and ranging", derived from the term "radar", itself an acronym for "radio detection and ranging". All laser rangefinders, laser altimeters and lidar units are derived from the early colidar systems. The first practical terrestrial application of a colidar system was the "Colidar Mark II", a large rifle-like laser rangefinder produced in 1963, which had a range of 11 km and an accuracy of 4.5 m, to be used for military targeting. The first mention of lidar as a stand-alone word in 1963 suggests that it originated as a portmanteau of "light" and "radar": "Eventually the laser may provide an extremely sensitive detector of particular wavelengths from distant objects. Meanwhile, it is being used to study the Moon by 'lidar' (light radar) ..." The name "photonic radar" is sometimes used to mean visible-spectrum range finding like lidar. Lidar's first applications were in meteorology, for which the National Center for Atmospheric Research used it to measure clouds and pollution. The general public became aware of the accuracy and usefulness of lidar systems in 1971 during the Apollo 15 mission, when astronauts used a laser altimeter to map the surface of the Moon. Although the English language no longer treats "radar" as an acronym, (i.e., uncapitalized), the word "lidar" was capitalized as "LIDAR" or "LiDAR" in some publications beginning in the 1980s. No consensus exists on capitalization. Various publications refer to lidar as "LIDAR", "LiDAR", "LIDaR", or "Lidar". The USGS uses both "LIDAR" and "lidar", sometimes in the same document; the New York Times predominantly uses "lidar" for staff-written articles, although contributing news feeds such as Reuters may use Lidar. 
General description Lidar uses ultraviolet, visible, or near infrared light to image objects. It can target a wide range of materials, including non-metallic objects, rocks, rain, chemical compounds, aerosols, clouds and even single molecules. A narrow laser beam can map physical features with very high resolutions; for example, an aircraft can map terrain at resolution or better. The essential concept of lidar was originated by E. H. Synge in 1930, who envisaged the use of powerful searchlights to probe the atmosphere. Indeed, lidar has since been used extensively for atmospheric research and meteorology. Lidar instruments fitted to aircraft and satellites carry out surveying and mapping a recent example being the U.S. Geological Survey Experimental Advanced Airborne Research Lidar. NASA has identified lidar as a key technology for enabling autonomous precision safe landing of future robotic and crewed lunar-landing vehicles. Wavelengths vary to suit the target: from about 10 micrometers (infrared) to approximately 250 nanometers (ultraviolet). Typically, light is reflected via backscattering, as opposed to pure reflection one might find with a mirror. Different types of scattering are used for different lidar applications: most commonly Rayleigh scattering, Mie scattering, Raman scattering, and fluorescence. Suitable combinations of wavelengths can allow remote mapping of atmospheric contents by identifying wavelength-dependent changes in the intensity of the returned signal. The name "photonic radar" is sometimes used to mean visible-spectrum range finding like lidar, although photonic radar more strictly refers to radio-frequency range finding using photonics components. Technology Mathematical formula A lidar determines the distance of an object or a surface with the formula: where c is the speed of light, d is the distance between the detector and the object or surface being detected, and t is the time spent for the laser light to travel to the object or surface being detected, then travel back to the detector. Design The two kinds of lidar detection schemes are "incoherent" or direct energy detection (which principally measures amplitude changes of the reflected light) and coherent detection (best for measuring Doppler shifts, or changes in the phase of the reflected light). Coherent systems generally use optical heterodyne detection. This is more sensitive than direct detection and allows them to operate at much lower power, but requires more complex transceivers. Both types employ pulse models: either micropulse or high energy. Micropulse systems utilize intermittent bursts of energy. They developed as a result of ever-increasing computer power, combined with advances in laser technology. They use considerably less energy in the laser, typically on the order of one microjoule, and are often "eye-safe", meaning they can be used without safety precautions. High-power systems are common in atmospheric research, where they are widely used for measuring atmospheric parameters: the height, layering and densities of clouds, cloud particle properties (extinction coefficient, backscatter coefficient, depolarization), temperature, pressure, wind, humidity, and trace gas concentration (ozone, methane, nitrous oxide, etc.). Components Lidar systems consist of several major components. Laser 600–1,000 nm lasers are most common for non-scientific applications. 
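The range equation referenced in the "Mathematical formula" passage above did not survive extraction; the relation it describes is the standard time-of-flight result d = c·t/2, where the factor of two accounts for the round trip. A minimal sketch follows; the one-microsecond example is an illustrative assumption, not a figure from the article:

```python
# Time-of-flight range equation described above: the measured interval t
# covers the round trip, so the one-way distance is d = c * t / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range(round_trip_time_s: float) -> float:
    """One-way distance (m) for a measured round-trip time (s)."""
    return C * round_trip_time_s / 2.0

# Illustrative example: a return arriving 1 microsecond after emission
# corresponds to a target roughly 150 m away.
print(lidar_range(1e-6))  # ~149.9 m
```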
The maximum power of the laser is limited, or an automatic shut-off system which turns the laser off at specific altitudes is used in order to make it eye-safe for the people on the ground. One common alternative, 1,550 nm lasers, are eye-safe at relatively high power levels since this wavelength is not strongly absorbed by the eye. A trade-off though is that current detector technology is less advanced, so these wavelengths are generally used at longer ranges with lower accuracies. They are also used for military applications because 1,550 nm is not visible in night vision goggles, unlike the shorter 1,000 nm infrared laser. Airborne topographic mapping lidars generally use 1,064 nm diode-pumped YAG lasers, while bathymetric (underwater depth research) systems generally use 532 nm frequency-doubled diode pumped YAG lasers because 532 nm penetrates water with much less attenuation than 1,064 nm. Laser settings include the laser repetition rate (which controls the data collection speed). Pulse length is generally an attribute of the laser cavity length, the number of passes required through the gain material (YAG, YLF, etc.), and Q-switch (pulsing) speed. Better target resolution is achieved with shorter pulses, provided the lidar receiver detectors and electronics have sufficient bandwidth. Phased arrays A phased array can illuminate any direction by using a microscopic array of individual antennas. Controlling the timing (phase) of each antenna steers a cohesive signal in a specific direction. Phased arrays have been used in radar since the 1940s. On the order of a million optical antennas are used to see a radiation pattern of a certain size in a certain direction. To achieve this the phase of each individual antenna (emitter) are precisely controlled. It is very difficult, if possible at all, to use the same technique in a lidar. The main problems are that all individual emitters must be coherent (technically coming from the same "master" oscillator or laser source), have dimensions about the wavelength of the emitted light (1 micron range) to act as a point source with their phases being controlled with high accuracy. Several companies are working on developing commercial solid-state lidar units but these units utilize a different principle described in a Flash Lidar below. Microelectromechanical machines Microelectromechanical mirrors (MEMS) are not entirely solid-state. However, their tiny form factor provides many of the same cost benefits. A single laser is directed to a single mirror that can be reoriented to view any part of the target field. The mirror spins at a rapid rate. However, MEMS systems generally operate in a single plane (left to right). To add a second dimension generally requires a second mirror that moves up and down. Alternatively, another laser can hit the same mirror from another angle. MEMS systems can be disrupted by shock/vibration and may require repeated calibration. Scanner and optics Image development speed is affected by the speed at which they are scanned. Options to scan the azimuth and elevation include dual oscillating plane mirrors, a combination with a polygon mirror, and a dual axis scanner. Optic choices affect the angular resolution and range that can be detected. A hole mirror or a beam splitter are options to collect a return signal. Photodetector and receiver electronics Two main photodetector technologies are used in lidar: solid-state photodetectors, such as silicon avalanche photodiodes, or photomultipliers. 
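As an aside on the pulse-length remark above (shorter pulses give better target resolution), a common rule of thumb is that two returns closer than c·τ/2 cannot be separated by a simple pulsed receiver. The sketch below is illustrative only; the pulse durations are assumptions, not values from the text:

```python
# Illustrative sketch: how pulse duration bounds the two-target
# range resolution of a simple pulsed lidar (delta_R ~ c * tau / 2).
C = 299_792_458.0  # m/s

def range_resolution(pulse_duration_s: float) -> float:
    """Approximate minimum separation (m) of two resolvable returns."""
    return C * pulse_duration_s / 2.0

for tau in (10e-9, 1e-9, 100e-12):  # assumed 10 ns, 1 ns and 100 ps pulses
    print(f"{tau:.0e} s pulse -> ~{range_resolution(tau):.3f} m resolution")
```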
The sensitivity of the receiver is another parameter that has to be balanced in a lidar design. Position and navigation systems Lidar sensors mounted on mobile platforms such as airplanes or satellites require instrumentation to determine the absolute position and orientation of the sensor. Such devices generally include a Global Positioning System receiver and an inertial measurement unit (IMU). Sensor Lidar uses active sensors that supply their own illumination source. The energy source hits objects and the reflected energy is detected and measured by sensors. Distance to the object is determined by recording the time between transmitted and backscattered pulses and by using the speed of light to calculate the distance traveled. Flash lidar allows for 3-D imaging because of the camera's ability to emit a larger flash and sense the spatial relationships and dimensions of area of interest with the returned energy. This allows for more accurate imaging because the captured frames do not need to be stitched together, and the system is not sensitive to platform motion. This results in less distortion. 3-D imaging can be achieved using both scanning and non-scanning systems. "3-D gated viewing laser radar" is a non-scanning laser ranging system that applies a pulsed laser and a fast gated camera. Research has begun for virtual beam steering using Digital Light Processing (DLP) technology. Imaging lidar can also be performed using arrays of high speed detectors and modulation sensitive detector arrays typically built on single chips using complementary metal–oxide–semiconductor (CMOS) and hybrid CMOS/Charge-coupled device (CCD) fabrication techniques. In these devices each pixel performs some local processing such as demodulation or gating at high speed, downconverting the signals to video rate so that the array can be read like a camera. Using this technique many thousands of pixels / channels may be acquired simultaneously. High resolution 3-D lidar cameras use homodyne detection with an electronic CCD or CMOS shutter. A coherent imaging lidar uses synthetic array heterodyne detection to enable a staring single element receiver to act as though it were an imaging array. In 2014, Lincoln Laboratory announced a new imaging chip with more than 16,384 pixels, each able to image a single photon, enabling them to capture a wide area in a single image. An earlier generation of the technology with one fourth as many pixels was dispatched by the U.S. military after the January 2010 Haiti earthquake. A single pass by a business jet at over Port-au-Prince was able to capture instantaneous snapshots of squares of the city at a resolution of , displaying the precise height of rubble strewn in city streets. The new system is ten times better, and could produce much larger maps more quickly. The chip uses indium gallium arsenide (InGaAs), which operates in the infrared spectrum at a relatively long wavelength that allows for higher power and longer ranges. In many applications, such as self-driving cars, the new system will lower costs by not requiring a mechanical component to aim the chip. InGaAs uses less hazardous wavelengths than conventional silicon detectors, which operate at visual wavelengths. New technologies for infrared single-photon counting LIDAR are advancing rapidly, including arrays and cameras in a variety of semiconductor and superconducting platforms. Flash lidar In flash lidar, the entire field of view is illuminated with a wide diverging laser beam in a single pulse. 
This is in contrast to conventional scanning lidar, which uses a collimated laser beam that illuminates a single point at a time, and the beam is raster scanned to illuminate the field of view point-by-point. This illumination method requires a different detection scheme as well. In both scanning and flash lidar, a time-of-flight camera is used to collect information about both the 3-D location and intensity of the light incident on it in every frame. However, in scanning lidar, this camera contains only a point sensor, while in flash lidar, the camera contains either a 1-D or a 2-D sensor array, each pixel of which collects 3-D location and intensity information. In both cases, the depth information is collected using the time of flight of the laser pulse (i.e., the time it takes each laser pulse to hit the target and return to the sensor), which requires the pulsing of the laser and acquisition by the camera to be synchronized. The result is a camera that takes pictures of distance, instead of colors. Flash lidar is especially advantageous, when compared to scanning lidar, when the camera, scene, or both are moving, since the entire scene is illuminated at the same time. With scanning lidar, motion can cause "jitter" from the lapse in time as the laser rasters over the scene. As with all forms of lidar, the onboard source of illumination makes flash lidar an active sensor. The signal that is returned is processed by embedded algorithms to produce a nearly instantaneous 3-D rendering of objects and terrain features within the field of view of the sensor. The laser pulse repetition frequency is sufficient for generating 3-D videos with high resolution and accuracy. The high frame rate of the sensor makes it a useful tool for a variety of applications that benefit from real-time visualization, such as highly precise remote landing operations. By immediately returning a 3-D elevation mesh of target landscapes, a flash sensor can be used to identify optimal landing zones in autonomous spacecraft landing scenarios. Seeing at a distance requires a powerful burst of light. The power is limited to levels that do not damage human retinas. Wavelengths must not affect human eyes. However, low-cost silicon imagers do not read light in the eye-safe spectrum. Instead, gallium-arsenide imagers are required, which can boost costs to $200,000. Gallium-arsenide is the same compound used to produce high-cost, high-efficiency solar panels usually used in space applications. Classification Based on orientation Lidar can be oriented to nadir, zenith, or laterally. For example, lidar altimeters look down, an atmospheric lidar looks up, and lidar-based collision avoidance systems are side-looking. Based on scanning mechanism Laser projections of lidars can be manipulated using various methods and mechanisms to produce a scanning effect: the standard spindle-type, which spins to give a 360-degree view; solid-state lidar, which has a fixed field of view, but no moving parts, and can use either MEMS or optical phased arrays to steer the beams; and flash lidar, which spreads a flash of light over a large field of view before the signal bounces back to a detector. Based on platform Lidar applications can be divided into airborne and terrestrial types. The two types require scanners with varying specifications based on the data's purpose, the size of the area to be captured, the range of measurement desired, the cost of equipment, and more. Spaceborne platforms are also possible, see satellite laser altimetry. 
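Because a flash-lidar frame is effectively a picture of distance, a common processing step is to back-project each pixel's depth into a 3-D point. The sketch below assumes a pinhole-camera model; the intrinsics (fx, fy, cx, cy) and the flat test scene are illustrative values, not parameters taken from the article:

```python
# Hedged sketch: back-projecting a per-pixel depth image from a
# time-of-flight / flash lidar camera into an (N, 3) point cloud.
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an HxW depth image (metres) using pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((480, 640), 12.0)                      # flat scene 12 m away
cloud = depth_to_points(depth, 525.0, 525.0, 320.0, 240.0)
print(cloud.shape)                                     # (307200, 3)
```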
Airborne Airborne lidar (also airborne laser scanning) is when a laser scanner, while attached to an aircraft during flight, creates a 3-D point cloud model of the landscape. This is currently the most detailed and accurate method of creating digital elevation models, replacing photogrammetry. One major advantage in comparison with photogrammetry is the ability to filter out reflections from vegetation from the point cloud model to create a digital terrain model which represents ground surfaces such as rivers, paths, cultural heritage sites, etc., which are concealed by trees. Within the category of airborne lidar, there is sometimes a distinction made between high-altitude and low-altitude applications, but the main difference is a reduction in both accuracy and point density of data acquired at higher altitudes. Airborne lidar can also be used to create bathymetric models in shallow water. The main constituents of airborne lidar include digital elevation models (DEM) and digital surface models (DSM). The points and ground points are the vectors of discrete points while DEM and DSM are interpolated raster grids of discrete points. The process also involves capturing of digital aerial photographs. To interpret deep-seated landslides for example, under the cover of vegetation, scarps, tension cracks or tipped trees airborne lidar is used. Airborne lidar digital elevation models can see through the canopy of forest cover, perform detailed measurements of scarps, erosion and tilting of electric poles. Airborne lidar data is processed using a toolbox called Toolbox for Lidar Data Filtering and Forest Studies (TIFFS) for lidar data filtering and terrain study software. The data is interpolated to digital terrain models using the software. The laser is directed at the region to be mapped and each point's height above the ground is calculated by subtracting the original z-coordinate from the corresponding digital terrain model elevation. Based on this height above the ground the non-vegetation data is obtained which may include objects such as buildings, electric power lines, flying birds, insects, etc. The rest of the points are treated as vegetation and used for modeling and mapping. Within each of these plots, lidar metrics are calculated by calculating statistics such as mean, standard deviation, skewness, percentiles, quadratic mean, etc. Multiple commercial lidar systems for unmanned aerial vehicles are currently on the market. These platforms can systematically scan large areas, or provide a cheaper alternative to manned aircraft for smaller scanning operations. Airborne lidar bathymetry The airborne lidar bathymetric technological system involves the measurement of time of flight of a signal from a source to its return to the sensor. The data acquisition technique involves a sea floor mapping component and a ground truth component that includes video transects and sampling. It works using a green spectrum (532 nm) laser beam. Two beams are projected onto a fast rotating mirror, which creates an array of points. One of the beams penetrates the water and also detects the bottom surface of the water under favorable conditions. Water depth measurable by lidar depends on the clarity of the water and the absorption of the wavelength used. Water is most transparent to green and blue light, so these will penetrate deepest in clean water. Blue-green light of 532 nm produced by frequency doubled solid-state IR laser output is the standard for airborne bathymetry. 
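The normalisation and plot-statistics steps described above for airborne lidar can be sketched as follows: each return's height above ground is its elevation minus the interpolated terrain-model elevation, and plot-level metrics are then computed on the vegetation returns. The sample elevations and the 0.5 m ground/vegetation threshold are illustrative assumptions:

```python
# Hedged sketch of height-above-ground normalisation and plot metrics.
import numpy as np

def plot_metrics(point_z: np.ndarray, terrain_z: np.ndarray) -> dict:
    """Summary statistics of height above ground for one plot."""
    hag = point_z - terrain_z          # height above ground per return
    veg = hag[hag > 0.5]               # assumed threshold separating ground hits
    return {
        "mean": float(np.mean(veg)),
        "std": float(np.std(veg)),
        "p95": float(np.percentile(veg, 95)),
        "max": float(np.max(veg)),
    }

z = np.array([312.1, 318.4, 325.0, 311.9, 322.7])    # return elevations (m)
dtm = np.array([311.8, 311.9, 312.0, 311.8, 311.9])  # terrain model (m)
print(plot_metrics(z, dtm))
```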
This light can penetrate water but pulse strength attenuates exponentially with distance traveled through the water. Lidar can measure depths from about , with vertical accuracy in the order of . The surface reflection makes water shallower than about difficult to resolve, and absorption limits the maximum depth. Turbidity causes scattering and has a significant role in determining the maximum depth that can be resolved in most situations, and dissolved pigments can increase absorption depending on wavelength. Other reports indicate that water penetration tends to be between two and three times Secchi depth. Bathymetric lidar is most useful in the depth range in coastal mapping. On average in fairly clear coastal seawater lidar can penetrate to about , and in turbid water up to about . An average value found by Saputra et al, 2021, is for the green laser light to penetrate water about one and a half to two times Secchi depth in Indonesian waters. Water temperature and salinity have an effect on the refractive index which has a small effect on the depth calculation. The data obtained shows the full extent of the land surface exposed above the sea floor. This technique is extremely useful as it will play an important role in the major sea floor mapping program. The mapping yields onshore topography as well as underwater elevations. Sea floor reflectance imaging is another solution product from this system which can benefit mapping of underwater habitats. This technique has been used for three-dimensional image mapping of California's waters using a hydrographic lidar. Full-waveform lidar Airborne lidar systems were traditionally able to acquire only a few peak returns, while more recent systems acquire and digitize the entire reflected signal. Scientists analysed the waveform signal for extracting peak returns using Gaussian decomposition. Zhuang et al, 2017 used this approach for estimating aboveground biomass. Handling the huge amounts of full-waveform data is difficult. Therefore, Gaussian decomposition of the waveforms is effective, since it reduces the data and is supported by existing workflows that support interpretation of 3-D point clouds. Recent studies investigated voxelisation. The intensities of the waveform samples are inserted into a voxelised space (3-D grayscale image) building up a 3-D representation of the scanned area. Related metrics and information can then be extracted from that voxelised space. Structural information can be extracted using 3-D metrics from local areas and there is a case study that used the voxelisation approach for detecting dead standing Eucalypt trees in Australia. Terrestrial Terrestrial applications of lidar (also terrestrial laser scanning) happen on the Earth's surface and can be either stationary or mobile. Stationary terrestrial scanning is most common as a survey method, for example in conventional topography, monitoring, cultural heritage documentation and forensics. The 3-D point clouds acquired from these types of scanners can be matched with digital images taken of the scanned area from the scanner's location to create realistic looking 3-D models in a relatively short time when compared to other technologies. Each point in the point cloud is given the colour of the pixel from the image taken at the same location and direction as the laser beam that created the point. Mobile lidar (also mobile laser scanning) is when two or more scanners are attached to a moving vehicle to collect data along a path. 
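A minimal illustration of the Gaussian decomposition of a full-waveform return mentioned above: fit a small sum of Gaussians to the digitised waveform and treat each fitted component as a discrete return. The synthetic two-echo waveform (e.g. canopy plus ground) and the fitting setup are assumptions for demonstration only:

```python
# Hedged sketch: Gaussian decomposition of a synthetic full-waveform return.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-0.5 * ((t - m1) / s1) ** 2) +
            a2 * np.exp(-0.5 * ((t - m2) / s2) ** 2))

t = np.linspace(0, 100, 500)                      # sample times (ns)
waveform = (two_gaussians(t, 1.0, 35, 3, 0.6, 60, 4)
            + np.random.normal(0, 0.02, t.size))  # canopy + ground echoes

p0 = [1, 30, 5, 0.5, 65, 5]                       # rough initial guesses
params, _ = curve_fit(two_gaussians, t, waveform, p0=p0)
print("fitted peak centres (ns):", params[1], params[4])
```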
These scanners are almost always paired with other kinds of equipment, including GNSS receivers and IMUs. One example application is surveying streets, where power lines, exact bridge heights, bordering trees, etc. all need to be taken into account. Instead of collecting each of these measurements individually in the field with a tachymeter, a 3-D model from a point cloud can be created where all of the measurements needed can be made, depending on the quality of the data collected. This eliminates the problem of forgetting to take a measurement, so long as the model is available, reliable and has an appropriate level of accuracy. Terrestrial lidar mapping involves a process of occupancy grid map generation. The process involves an array of cells divided into grids which employ a process to store the height values when lidar data falls into the respective grid cell. A binary map is then created by applying a particular threshold to the cell values for further processing. The next step is to process the radial distance and z-coordinates from each scan to identify which 3-D points correspond to each of the specified grid cell leading to the process of data formation. Applications There are a wide variety of lidar applications, in addition to the applications listed below, as it is often mentioned in National lidar dataset programs. These applications are largely determined by the range of effective object detection; resolution, which is how accurately the lidar identifies and classifies objects; and reflectance confusion, meaning how well the lidar can see something in the presence of bright objects, like reflective signs or bright sun. Companies are working to cut the cost of lidar sensors, currently anywhere from about US$1,200 to more than $12,000. Lower prices will make lidar more attractive for new markets. Agriculture Agricultural robots have been used for a variety of purposes ranging from seed and fertilizer dispersions, sensing techniques as well as crop scouting for the task of weed control. Lidar can help determine where to apply costly fertilizer. It can create a topographical map of the fields and reveal slopes and sun exposure of the farmland. Researchers at the Agricultural Research Service used this topographical data with the farmland yield results from previous years, to categorize land into zones of high, medium, or low yield. This indicates where to apply fertilizer to maximize yield. Lidar is now used to monitor insects in the field. The use of lidar can detect the movement and behavior of individual flying insects, with identification down to sex and species. In 2017 a patent application was published on this technology in the United States, Europe, and China. Another application is crop mapping in orchards and vineyards, to detect foliage growth and the need for pruning or other maintenance, detect variations in fruit production, or count plants. Lidar is useful in GNSS-denied situations, such as nut and fruit orchards, where foliage causes interference for agriculture equipment that would otherwise utilize a precise GNSS fix. Lidar sensors can detect and track the relative position of rows, plants, and other markers so that farming equipment can continue operating until a GNSS fix is reestablished. Plant species classification Controlling weeds requires identifying plant species. This can be done by using 3-D lidar and machine learning. Lidar produces plant contours as a "point cloud" with range and reflectance values. 
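A compact sketch of the occupancy-grid generation described above for terrestrial and mobile lidar mapping: returns are binned into ground-plane cells, a height value is stored per cell, and a threshold turns the grid into a binary map. The grid resolution, extent and threshold are illustrative assumptions:

```python
# Hedged sketch: lidar returns -> per-cell max height -> binary obstacle map.
import numpy as np

def occupancy_grid(points_xyz: np.ndarray, cell_size: float = 0.5,
                   grid_dim: int = 100, height_threshold: float = 0.3):
    heights = np.full((grid_dim, grid_dim), -np.inf)
    for x, y, z in points_xyz:
        i = int(x // cell_size)
        j = int(y // cell_size)
        if 0 <= i < grid_dim and 0 <= j < grid_dim:
            heights[i, j] = max(heights[i, j], z)   # store max height per cell
    return heights > height_threshold               # binary obstacle map

pts = np.array([[1.2, 3.4, 0.05], [1.3, 3.5, 1.10], [10.0, 2.0, 0.02]])
print(occupancy_grid(pts).sum())                    # occupied cells -> 1
```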
This data is transformed, and features are extracted from it. If the species is known, the features are added as new data. The species is labelled and its features are initially stored as an example to identify the species in the real environment. This method is efficient because it uses a low-resolution lidar and supervised learning. It includes an easy-to-compute feature set with common statistical features which are independent of the plant size. Archaeology Lidar has many uses in archaeology, including planning of field campaigns, mapping features under forest canopy, and overview of broad, continuous features indistinguishable from the ground. Lidar can produce high-resolution datasets quickly and cheaply. Lidar-derived products can be easily integrated into a Geographic Information System (GIS) for analysis and interpretation. Lidar can also help to create high-resolution digital elevation models (DEMs) of archaeological sites that can reveal micro-topography that is otherwise hidden by vegetation. The intensity of the returned lidar signal can be used to detect features buried under flat vegetated surfaces such as fields, especially when mapping using the infrared spectrum. The presence of these features affects plant growth and thus the amount of infrared light reflected back. For example, at Fort Beauséjour – Fort Cumberland National Historic Site, Canada, lidar discovered archaeological features related to the siege of the Fort in 1755. Features that could not be distinguished on the ground or through aerial photography were identified by overlaying hill shades of the DEM created with artificial illumination from various angles. Another example is work at Caracol by Arlen Chase and his wife Diane Zaino Chase. In 2012, lidar was used to search for the legendary city of La Ciudad Blanca or "City of the Monkey God" in the La Mosquitia region of the Honduran jungle. During a seven-day mapping period, evidence was found of man-made structures. In June 2013, the rediscovery of the city of Mahendraparvata was announced. In southern New England, lidar was used to reveal stone walls, building foundations, abandoned roads, and other landscape features obscured in aerial photography by the region's dense forest canopy. In Cambodia, lidar data were used by Damian Evans and Roland Fletcher to reveal anthropogenic changes to Angkor landscape. In 2012, lidar revealed that the Purépecha settlement of Angamuco in Michoacán, Mexico had about as many buildings as today's Manhattan; while in 2016, its use in mapping ancient Maya causeways in northern Guatemala, revealed 17 elevated roads linking the ancient city of El Mirador to other sites. In 2018, archaeologists using lidar discovered more than 60,000 man-made structures in the Maya Biosphere Reserve, a "major breakthrough" that showed the Maya civilization was much larger than previously thought. In 2024, archaeologists using lidar discovered the Upano Valley sites. Autonomous vehicles Autonomous vehicles may use lidar for obstacle detection and avoidance to navigate safely through environments. The introduction of lidar was a pivotal occurrence that was the key enabler behind Stanley, the first autonomous vehicle to successfully complete the DARPA Grand Challenge. Point cloud output from the lidar sensor provides the necessary data for robot software to determine where potential obstacles exist in the environment and where the robot is in relation to those potential obstacles. 
Singapore's Singapore-MIT Alliance for Research and Technology (SMART) is actively developing technologies for autonomous lidar vehicles. The very first generations of automotive adaptive cruise control systems used only lidar sensors. Object detection for transportation systems In transportation systems, to ensure vehicle and passenger safety and to develop electronic systems that deliver driver assistance, understanding the vehicle and its surrounding environment is essential. Lidar systems play an important role in the safety of transportation systems. Many electronic systems which add to the driver assistance and vehicle safety such as Adaptive Cruise Control (ACC), Emergency Brake Assist, and Anti-lock Braking System (ABS) depend on the detection of a vehicle's environment to act autonomously or semi-autonomously. Lidar mapping and estimation achieve this. Basics overview: Current lidar systems use rotating hexagonal mirrors which split the laser beam. The upper three beams are used for vehicle and obstacles ahead and the lower beams are used to detect lane markings and road features. The major advantage of using lidar is that the spatial structure is obtained and this data can be fused with other sensors such as radar, etc. to get a better picture of the vehicle environment in terms of static and dynamic properties of the objects present in the environment. Conversely, a significant issue with lidar is the difficulty in reconstructing point cloud data in poor weather conditions. In heavy rain, for example, the light pulses emitted from the lidar system are partially reflected off of rain droplets which adds noise to the data, called 'echoes'. Below mentioned are various approaches of processing lidar data and using it along with data from other sensors through sensor fusion to detect the vehicle environment conditions. Obstacle detection and road environment recognition using lidar This method proposed by Kun Zhou et al. not only focuses on object detection and tracking but also recognizes lane marking and road features. As mentioned earlier the lidar systems use rotating hexagonal mirrors that split the laser beam into six beams. The upper three layers are used to detect the forward objects such as vehicles and roadside objects. The sensor is made of weather-resistant material. The data detected by lidar are clustered to several segments and tracked by Kalman filter. Data clustering here is done based on characteristics of each segment based on object model, which distinguish different objects such as vehicles, signboards, etc. These characteristics include the dimensions of the object, etc. The reflectors on the rear edges of vehicles are used to differentiate vehicles from other objects. Object tracking is done using a two-stage Kalman filter considering the stability of tracking and the accelerated motion of objects Lidar reflective intensity data is also used for curb detection by making use of robust regression to deal with occlusions. The road marking is detected using a modified Otsu method by distinguishing rough and shiny surfaces. Advantages Roadside reflectors that indicate lane border are sometimes hidden due to various reasons. Therefore, other information is needed to recognize the road border. The lidar used in this method can measure the reflectivity from the object. Hence, with this data the road border can also be recognized. Also, the usage of a sensor with weather-robust head helps to detect the objects even in bad weather conditions. 
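The tracking stage described above applies Kalman filtering to clustered lidar segments. The following is a deliberately reduced sketch, not the two-stage filter of Zhou et al.: a single constant-velocity Kalman filter tracking one object's range from noisy measurements, with all matrices, noise levels and measurements chosen as illustrative assumptions:

```python
# Hedged sketch: constant-velocity Kalman filter on noisy lidar ranges.
import numpy as np

dt = 0.1                                   # scan interval (s)
F = np.array([[1, dt], [0, 1]])            # state transition (range, range rate)
H = np.array([[1.0, 0.0]])                 # we only measure range
Q = np.diag([0.01, 0.1])                   # process noise
R = np.array([[0.05]])                     # measurement noise
x = np.array([[20.0], [0.0]])              # initial state: 20 m, stationary
P = np.eye(2)

for z in [19.8, 19.1, 18.5, 17.9, 17.2]:   # simulated range measurements (m)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print("estimated range (m) and range rate (m/s):", x.ravel())
```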
Canopy Height Model before and after flood is a good example. Lidar can detect highly detailed canopy height data as well as its road border. Lidar measurements help identify the spatial structure of the obstacle. This helps distinguish objects based on size and estimate the impact of driving over it. Lidar systems provide better range and a large field of view, which helps in detecting obstacles on the curves. This is one of its major advantages over RADAR systems, which have a narrower field of view. The fusion of lidar measurement with different sensors makes the system robust and useful in real-time applications, since lidar dependent systems cannot estimate the dynamic information about the detected object. It has been shown that lidar can be manipulated, such that self-driving cars are tricked into taking evasive action. Ecology and conservation Lidar has also found many applications for mapping natural and managed landscapes such as forests, wetlands, and grasslands. Canopy heights, biomass measurements, and leaf area can all be studied using airborne lidar systems. Similarly, lidar is also used by many industries, including Energy and Railroad, and the Department of Transportation as a faster way of surveying. Topographic maps can also be generated readily from lidar, including for recreational use such as in the production of orienteering maps. Lidar has also been applied to estimate and assess the biodiversity of plants, fungi, and animals. Using southern bull kelp in New Zealand, coastal lidar mapping data has been compared with population genomic evidence to form hypotheses regarding the occurrence and timing of prehistoric earthquake uplift events. Forestry Lidar systems have also been applied to improve forestry management. Measurements are used to take inventory in forest plots as well as calculate individual tree heights, crown width and crown diameter. Other statistical analysis use lidar data to estimate total plot information such as canopy volume, mean, minimum and maximum heights, vegetation cover, biomass, and carbon density. Aerial lidar has been used to map the bush fires in Australia in early 2020. The data was manipulated to view bare earth, and identify healthy and burned vegetation. Geology and soil science High-resolution digital elevation maps generated by airborne and stationary lidar have led to significant advances in geomorphology (the branch of geoscience concerned with the origin and evolution of the Earth surface topography). The lidar abilities to detect subtle topographic features such as river terraces and river channel banks, glacial landforms, to measure the land-surface elevation beneath the vegetation canopy, to better resolve spatial derivatives of elevation, to rockfall detection, to detect elevation changes between repeat surveys have enabled many novel studies of the physical and chemical processes that shape landscapes. In 2005 the Tour Ronde in the Mont Blanc massif became the first high alpine mountain on which lidar was employed to monitor the increasing occurrence of severe rock-fall over large rock faces allegedly caused by climate change and degradation of permafrost at high altitude. Lidar is also used in structural geology and geophysics as a combination between airborne lidar and GNSS for the detection and study of faults, for measuring uplift. The output of the two technologies can produce extremely accurate elevation models for terrain – models that can even measure ground elevation through trees. 
This combination was used most famously to find the location of the Seattle Fault in Washington, United States. This combination also measures uplift at Mount St. Helens by using data from before and after the 2004 uplift. Airborne lidar systems monitor glaciers and have the ability to detect subtle amounts of growth or decline. A satellite-based system, the NASA ICESat, includes a lidar sub-system for this purpose. The NASA Airborne Topographic Mapper is also used extensively to monitor glaciers and perform coastal change analysis. The combination is also used by soil scientists while creating a soil survey. The detailed terrain modeling allows soil scientists to see slope changes and landform breaks which indicate patterns in soil spatial relationships. Atmosphere Initially, based on ruby lasers, lidar for meteorological applications was constructed shortly after the invention of the laser and represents one of the first applications of laser technology. Lidar technology has since expanded vastly in capability and lidar systems are used to perform a range of measurements that include profiling clouds, measuring winds, studying aerosols, and quantifying various atmospheric components. Atmospheric components can in turn provide useful information including surface pressure (by measuring the absorption of oxygen or nitrogen), greenhouse gas emissions (carbon dioxide and methane), photosynthesis (carbon dioxide), fires (carbon monoxide), and humidity (water vapor). Atmospheric lidars can be either ground-based, airborne or satellite-based depending on the type of measurement. Atmospheric lidar remote sensing works in two ways – by measuring backscatter from the atmosphere, and by measuring the scattered reflection off the ground (when the lidar is airborne) or other hard surface. Backscatter from the atmosphere directly gives a measure of clouds and aerosols. Other derived measurements from backscatter such as winds or cirrus ice crystals require careful selecting of the wavelength and/or polarization detected. Doppler lidar and Rayleigh Doppler lidar are used to measure temperature and wind speed along the beam by measuring the frequency of the backscattered light. The Doppler broadening of gases in motion allows the determination of properties via the resulting frequency shift. Scanning lidars, such as NASA's conical-scanning HARLIE, have been used to measure atmospheric wind velocity. The ESA wind mission ADM-Aeolus will be equipped with a Doppler lidar system in order to provide global measurements of vertical wind profiles. A doppler lidar system was used in the 2008 Summer Olympics to measure wind fields during the yacht competition. Doppler lidar systems are also now beginning to be successfully applied in the renewable energy sector to acquire wind speed, turbulence, wind veer, and wind shear data. Both pulsed and continuous wave systems are being used. Pulsed systems use signal timing to obtain vertical distance resolution, whereas continuous wave systems rely on detector focusing. The term, eolics, has been proposed to describe the collaborative and interdisciplinary study of wind using computational fluid mechanics simulations and Doppler lidar measurements. The ground reflection of an airborne lidar gives a measure of surface reflectivity (assuming the atmospheric transmittance is well known) at the lidar wavelength, however, the ground reflection is typically used for making absorption measurements of the atmosphere. 
"Differential absorption lidar" (DIAL) measurements utilize two or more closely spaced (less than 1 nm) wavelengths to factor out surface reflectivity as well as other transmission losses, since these factors are relatively insensitive to wavelength. When tuned to the appropriate absorption lines of a particular gas, DIAL measurements can be used to determine the concentration (mixing ratio) of that particular gas in the atmosphere. This is referred to as an Integrated Path Differential Absorption (IPDA) approach, since it is a measure of the integrated absorption along the entire lidar path. IPDA lidars can be either pulsed or CW and typically use two or more wavelengths. IPDA lidars have been used for remote sensing of carbon dioxide and methane. Synthetic array lidar allows imaging lidar without the need for an array detector. It can be used for imaging Doppler velocimetry, ultra-fast frame rate imaging (millions of frames per second), as well as for speckle reduction in coherent lidar. An extensive lidar bibliography for atmospheric and hydrospheric applications is given by Grant. Law enforcement Lidar speed guns are used by the police to measure the speed of vehicles for speed limit enforcement purposes. Additionally, it is used in forensics to aid in crime scene investigations. Scans of a scene are taken to record exact details of object placement, blood, and other important information for later review. These scans can also be used to determine bullet trajectory in cases of shootings. Military Few military applications are known to be in place and are classified (such as the lidar-based speed measurement of the AGM-129 ACM stealth nuclear cruise missile), but a considerable amount of research is underway in their use for imaging. Higher resolution systems collect enough detail to identify targets, such as tanks. Examples of military applications of lidar include the Airborne Laser Mine Detection System (ALMDS) for counter-mine warfare by Areté Associates. A NATO report (RTO-TR-SET-098) evaluated the potential technologies to do stand-off detection for the discrimination of biological warfare agents. The potential technologies evaluated were Long-Wave Infrared (LWIR), Differential Scattering (DISC), and Ultraviolet Laser Induced Fluorescence (UV-LIF). The report concluded that : Based upon the results of the lidar systems tested and discussed above, the Task Group recommends that the best option for the near-term (2008–2010) application of stand-off detection systems is UV-LIF , however, in the long-term, other techniques such as stand-off Raman spectroscopy may prove to be useful for identification of biological warfare agents. Short-range compact spectrometric lidar based on Laser-Induced Fluorescence (LIF) would address the presence of bio-threats in aerosol form over critical indoor, semi-enclosed and outdoor venues such as stadiums, subways, and airports. This near real-time capability would enable rapid detection of a bioaerosol release and allow for timely implementation of measures to protect occupants and minimize the extent of contamination. The Long-Range Biological Standoff Detection System (LR-BSDS) was developed for the U.S. Army to provide the earliest possible standoff warning of a biological attack. It is an airborne system carried by helicopter to detect synthetic aerosol clouds containing biological and chemical agents at long range. The LR-BSDS, with a detection range of 30 km or more, was fielded in June 1997. 
Five lidar units produced by the German company Sick AG were used for short range detection on Stanley, the autonomous car that won the 2005 DARPA Grand Challenge. A robotic Boeing AH-6 performed a fully autonomous flight in June 2010, including avoiding obstacles using lidar. Mining For the calculation of ore volumes is accomplished by periodic (monthly) scanning in areas of ore removal, then comparing surface data to the previous scan. Lidar sensors may also be used for obstacle detection and avoidance for robotic mining vehicles such as in the Komatsu Autonomous Haulage System (AHS) used in Rio Tinto's Mine of the Future. Physics and astronomy A worldwide network of observatories uses lidars to measure the distance to reflectors placed on the Moon, allowing the position of the Moon to be measured with millimeter precision and tests of general relativity to be done. MOLA, the Mars Orbiting Laser Altimeter, used a lidar instrument in a Mars-orbiting satellite (the NASA Mars Global Surveyor) to produce a spectacularly precise global topographic survey of the red planet. Laser altimeters produced global elevation models of Mars, the Moon (Lunar Orbiter Laser Altimeter (LOLA)) Mercury (Mercury Laser Altimeter (MLA)), NEAR–Shoemaker Laser Rangefinder (NLR). Future missions will also include laser altimeter experiments such as the Ganymede Laser Altimeter (GALA) as part of the Jupiter Icy Moons Explorer (JUICE) mission. In September, 2008, the NASA Phoenix lander used lidar to detect snow in the atmosphere of Mars. In atmospheric physics, lidar is used as a remote detection instrument to measure densities of certain constituents of the middle and upper atmosphere, such as potassium, sodium, or molecular nitrogen and oxygen. These measurements can be used to calculate temperatures. Lidar can also be used to measure wind speed and to provide information about vertical distribution of the aerosol particles. At the JET nuclear fusion research facility, in the UK near Abingdon, Oxfordshire, lidar Thomson scattering is used to determine electron density and temperature profiles of the plasma. Rock mechanics Lidar has been widely used in rock mechanics for rock mass characterization and slope change detection. Some important geomechanical properties from the rock mass can be extracted from the 3-D point clouds obtained by means of the lidar. Some of these properties are: Discontinuity orientation Discontinuity spacing and RQD Discontinuity aperture Discontinuity persistence Discontinuity roughness Water infiltration Some of these properties have been used to assess the geomechanical quality of the rock mass through the RMR index. Moreover, as the orientations of discontinuities can be extracted using the existing methodologies, it is possible to assess the geomechanical quality of a rock slope through the SMR index. In addition to this, the comparison of different 3-D point clouds from a slope acquired at different times allows researchers to study the changes produced on the scene during this time interval as a result of rockfalls or any other landsliding processes. THOR THOR is a laser designed toward measuring Earth's atmospheric conditions. The laser enters a cloud cover and measures the thickness of the return halo. The sensor has a fiber optic aperture with a width of that is used to measure the return light. Robotics Lidar technology is being used in robotics for the perception of the environment as well as object classification. 
The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high precision distance to the ground, and approach velocity can enable safe landing of robotic and crewed vehicles with a high degree of precision. Lidar are also widely used in robotics for simultaneous localization and mapping and well integrated into robot simulators. Refer to the Military section above for further examples. Spaceflight Lidar is increasingly being utilized for rangefinding and orbital element calculation of relative velocity in proximity operations and stationkeeping of spacecraft. Lidar has also been used for atmospheric studies from space. Short pulses of laser light beamed from a spacecraft can reflect off tiny particles in the atmosphere and back to a telescope aligned with the spacecraft laser. By precisely timing the lidar echo, and by measuring how much laser light is received by the telescope, scientists can accurately determine the location, distribution and nature of the particles. The result is a revolutionary new tool for studying constituents in the atmosphere, from cloud droplets to industrial pollutants, which are difficult to detect by other means." Laser altimetry is used to make digital elevation maps of planets, including the Mars Orbital Laser Altimeter (MOLA) mapping of Mars, the Lunar Orbital Laser Altimeter (LOLA) and Lunar Altimeter (LALT) mapping of the Moon, and the Mercury Laser Altimeter (MLA) mapping of Mercury. It is also used to help navigate the helicopter Ingenuity in its record-setting flights over the terrain of Mars. Surveying Airborne lidar sensors are used by companies in the remote sensing field. They can be used to create a DTM (Digital Terrain Model) or DEM (Digital Elevation Model); this is quite a common practice for larger areas as a plane can acquire wide swaths in a single flyover. Greater vertical accuracy of below can be achieved with a lower flyover, even in forests, where it is able to give the height of the canopy as well as the ground elevation. Typically, a GNSS receiver configured over a georeferenced control point is needed to link the data in with the WGS (World Geodetic System). Lidar is also in use in hydrographic surveying. Depending upon the clarity of the water lidar can measure depths from with a vertical accuracy of and horizontal accuracy of . Transport Lidar has been used in the railroad industry to generate asset health reports for asset management and by departments of transportation to assess their road conditions. CivilMaps.com is a leading company in the field. Lidar has been used in adaptive cruise control (ACC) systems for automobiles. Systems such as those by Siemens, Hella, Ouster and Cepton use a lidar device mounted on the front of the vehicle, such as the bumper, to monitor the distance between the vehicle and any vehicle in front of it. In the event, the vehicle in front slows down or is too close, the ACC applies the brakes to slow the vehicle. When the road ahead is clear, the ACC allows the vehicle to accelerate to a speed preset by the driver. Refer to the Military section above for further examples. A lidar-based device, the Ceilometer is used at airports worldwide to measure the height of clouds on runway approach paths. Wind farm optimization Lidar can be used to increase the energy output from wind farms by accurately measuring wind speeds and wind turbulence. 
Experimental lidar systems can be mounted on the nacelle of a wind turbine or integrated into the rotating spinner to measure oncoming horizontal winds, winds in the wake of the wind turbine, and proactively adjust blades to protect components and increase power. Lidar is also used to characterise the incident wind resource for comparison with wind turbine power production to verify the performance of the wind turbine by measuring the wind turbine's power curve. Wind farm optimization can be considered a topic in applied eolics. Another aspect of lidar in wind related industry is to use computational fluid dynamics over lidar-scanned surfaces in order to assess the wind potential, which can be used for optimal wind farms placement. Solar photovoltaic deployment optimization Lidar can also be used to assist planners and developers in optimizing solar photovoltaic systems at the city level by determining appropriate roof tops and for determining shading losses. Recent airborne laser scanning efforts have focused on ways to estimate the amount of solar light hitting vertical building facades, or by incorporating more detailed shading losses by considering the influence from vegetation and larger surrounding terrain. Video games Recent simulation racing games such as rFactor Pro, iRacing, Assetto Corsa and Project CARS increasingly feature race tracks reproduced from 3-D point clouds acquired through lidar surveys, resulting in surfaces replicated with centimeter or millimeter precision in the in-game 3-D environment. The 2017 exploration game Scanner Sombre, by Introversion Software, uses lidar as a fundamental game mechanic. In Build the Earth, lidar is used to create accurate renders of terrain in Minecraft to account for any errors (mainly regarding elevation) in the default generation. The process of rendering terrain into Build the Earth is limited by the amount of data available in region as well as the speed it takes to convert the file into block data. Other uses The video for the 2007 song "House of Cards" by Radiohead was believed to be the first use of real-time 3-D laser scanning to record a music video. The range data in the video is not completely from a lidar, as structured light scanning is also used. In 2020, Apple introduced the fourth generation of iPad Pro with a lidar sensor integrated into the rear camera module, especially developed for augmented reality (AR) experiences. The feature was later included in the iPhone 12 Pro lineup and subsequent Pro models. On Apple devices, lidar empowers portrait mode pictures with night mode, quickens auto focus and improves accuracy in the Measure app. In 2022, Wheel of Fortune started using lidar technology to track when Vanna White moves her hand over the puzzle board to reveal letters. The first episode to have this technology was in the season 40 premiere. Alternative technologies Computer stereo vision has shown promise as an alternative to lidar for close range applications.
Technology
Surveying tools
null
41968
https://en.wikipedia.org/wiki/Gain%20%28electronics%29
Gain (electronics)
In electronics, gain is a measure of the ability of a two-port circuit (often an amplifier) to increase the power or amplitude of a signal from the input to the output port by adding energy converted from some power supply to the signal. It is usually defined as the mean ratio of the signal amplitude or power at the output port to the amplitude or power at the input port. It is often expressed using the logarithmic decibel (dB) units ("dB gain"). A gain greater than one (greater than zero dB), that is, amplification, is the defining property of an active device or circuit, while a passive circuit will have a gain of less than one. The term gain alone is ambiguous, and can refer to the ratio of output to input voltage (voltage gain), current (current gain) or electric power (power gain). In the field of audio and general purpose amplifiers, especially operational amplifiers, the term usually refers to voltage gain, but in radio frequency amplifiers it usually refers to power gain. Furthermore, the term gain is also applied in systems such as sensors where the input and output have different units; in such cases the gain units must be specified, as in "5 microvolts per photon" for the responsivity of a photosensor. The "gain" of a bipolar transistor normally refers to forward current transfer ratio, either hFE ("beta", the static ratio of Ic divided by Ib at some operating point), or sometimes hfe (the small-signal current gain, the slope of the graph of Ic against Ib at a point). The gain of an electronic device or circuit generally varies with the frequency of the applied signal. Unless otherwise stated, the term refers to the gain for frequencies in the passband, the intended operating frequency range of the equipment. The term gain has a different meaning in antenna design; antenna gain is the ratio of radiation intensity from a directional antenna to (mean radiation intensity from a lossless antenna). Logarithmic units and decibels Power gain Power gain, in decibels (dB), is defined as follows: where is the power applied to the input, is the power from the output. A similar calculation can be done using a natural logarithm instead of a decimal logarithm, resulting in nepers instead of decibels: Voltage gain The power gain can be calculated using voltage instead of power using Joule's first law ; the formula is: In many cases, the input impedance and output impedance are equal, so the above equation can be simplified to: This simplified formula, the 20 log rule, is used to calculate a voltage gain in decibels and is equivalent to a power gain if and only if the impedances at input and output are equal. Current gain In the same way, when power gain is calculated using current instead of power, making the substitution , the formula is: In many cases, the input and output impedances are equal, so the above equation can be simplified to: This simplified formula is used to calculate a current gain in decibels and is equivalent to the power gain if and only if the impedances at input and output are equal. The "current gain" of a bipolar transistor, or , is normally given as a dimensionless number, the ratio of to (or slope of the -versus- graph, for ). In the cases above, gain will be a dimensionless quantity, as it is the ratio of like units (decibels are not used as units, but rather as a method of indicating a logarithmic relationship). In the bipolar transistor example, it is the ratio of the output current to the input current, both measured in amperes. 
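The decibel expressions referenced in the passage above were lost in extraction. The standard forms they describe are reproduced below as a reconstruction (not a quotation of the original article); they match the surrounding prose, including the 20 log rule for equal input and output impedances:

```latex
% Reconstruction of the gain formulas described in the prose above.
\[ G_{\mathrm{dB}} = 10 \log_{10}\!\frac{P_{\mathrm{out}}}{P_{\mathrm{in}}} \]
\[ G_{\mathrm{dB}} = 10 \log_{10}\!\frac{V_{\mathrm{out}}^{2}/R_{\mathrm{out}}}{V_{\mathrm{in}}^{2}/R_{\mathrm{in}}}
   = 20 \log_{10}\!\frac{V_{\mathrm{out}}}{V_{\mathrm{in}}}
   \quad \text{(the ``20 log rule'', valid when } R_{\mathrm{in}} = R_{\mathrm{out}}\text{)} \]
\[ G_{\mathrm{dB}} = 20 \log_{10}\!\frac{I_{\mathrm{out}}}{I_{\mathrm{in}}}
   \quad \text{(current form, again assuming equal impedances)} \]
```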
In the case of other devices, the gain will have a value in SI units. Such is the case with the operational transconductance amplifier, which has an open-loop gain (transconductance) in siemens (mhos), because the gain is a ratio of the output current to the input voltage. Example Q. An amplifier has an input impedance of 50 ohms and drives a load of 50 ohms. When its input (V_in) is 1 volt, its output (V_out) is 10 volts. What is its voltage and power gain? A. Voltage gain is simply: A_V = V_out / V_in = 10 V / 1 V = 10 V/V. The units V/V are optional but make it clear that this figure is a voltage gain and not a power gain. Using the expression for power, P = V^2/R, the power gain is: A_P = (V_out^2 / R_load) / (V_in^2 / R_in) = (10^2 / 50) / (1^2 / 50) = 100 W/W. Again, the units W/W are optional. Power gain is more usually expressed in decibels, thus: G_dB = 10 log10(A_P) = 10 log10(100) = 20 dB. Unity gain A gain of factor 1 (equivalent to 0 dB), where both input and output are at the same voltage level and impedance, is also known as unity gain.
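A quick numeric check of the worked example above (a sketch, assuming the stated 50-ohm input and load impedances):

```python
# Numeric check of the 1 V -> 10 V, 50-ohm example above.
import math

v_in, v_out, r_in, r_load = 1.0, 10.0, 50.0, 50.0
a_v = v_out / v_in                               # voltage gain: 10 V/V
a_p = (v_out**2 / r_load) / (v_in**2 / r_in)     # power gain: 100 W/W
print(a_v, a_p, 10 * math.log10(a_p))            # 10.0 100.0 20.0
```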
Physical sciences
Electrical circuits
null
41985
https://en.wikipedia.org/wiki/Shortest%20path%20problem
Shortest path problem
In graph theory, the shortest path problem is the problem of finding a path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized. The problem of finding the shortest path between two intersections on a road map may be modeled as a special case of the shortest path problem in graphs, where the vertices correspond to intersections and the edges correspond to road segments, each weighted by the length or distance of each segment. Definition The shortest path problem can be defined for graphs whether undirected, directed, or mixed. The definition for undirected graphs states that every edge can be traversed in either direction. Directed graphs require that consecutive vertices be connected by an appropriate directed edge. Two vertices are adjacent when they are both incident to a common edge. A path in an undirected graph is a sequence of vertices such that is adjacent to for . Such a path is called a path of length from to . (The are variables; their numbering relates to their position in the sequence and need not relate to a canonical labeling.) Let where is the edge incident to both and . Given a real-valued weight function , and an undirected (simple) graph , the shortest path from to is the path (where and ) that over all possible minimizes the sum When each edge in the graph has unit weight or , this is equivalent to finding the path with fewest edges. The problem is also sometimes called the single-pair shortest path problem, to distinguish it from the following variations: The single-source shortest path problem, in which we have to find shortest paths from a source vertex v to all other vertices in the graph. The single-destination shortest path problem, in which we have to find shortest paths from all vertices in the directed graph to a single destination vertex v. This can be reduced to the single-source shortest path problem by reversing the arcs in the directed graph. The all-pairs shortest path problem, in which we have to find shortest paths between every pair of vertices v, v' in the graph. These generalizations have significantly more efficient algorithms than the simplistic approach of running a single-pair shortest path algorithm on all relevant pairs of vertices. Algorithms Several well-known algorithms exist for solving this problem and its variants. Dijkstra's algorithm solves the single-source shortest path problem with only non-negative edge weights. Bellman–Ford algorithm solves the single-source problem if edge weights may be negative. A* search algorithm solves for single-pair shortest path using heuristics to try to speed up the search. Floyd–Warshall algorithm solves all pairs shortest paths. Johnson's algorithm solves all pairs shortest paths, and may be faster than Floyd–Warshall on sparse graphs. Viterbi algorithm solves the shortest stochastic path problem with an additional probabilistic weight on each node. Additional algorithms and associated evaluations may be found in . Single-source shortest paths Undirected graphs Unweighted graphs Directed acyclic graphs An algorithm using topological sorting can solve the single-source shortest path problem in time in arbitrarily-weighted directed acyclic graphs. Directed graphs with nonnegative weights The following table is taken from , with some corrections and additions. A green background indicates an asymptotically best bound in the table; L is the maximum length (or weight) among all edges, assuming integer edge weights. 
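For concreteness, here is a minimal sketch of Dijkstra's algorithm, listed among the algorithms above, for single-source shortest paths with non-negative weights using a binary heap. The example graph is an illustrative assumption:

```python
# Hedged sketch of Dijkstra's algorithm with a binary heap.
import heapq

def dijkstra(graph: dict, source):
    """graph: {u: [(v, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 7), ("c", 9), ("f", 14)],
     "b": [("c", 10), ("d", 15)],
     "c": [("d", 11), ("f", 2)],
     "d": [("e", 6)],
     "f": [("e", 9)]}
print(dijkstra(g, "a"))   # {'a': 0, 'b': 7, 'c': 9, 'f': 11, 'd': 20, 'e': 20}
```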
Directed graphs with arbitrary weights without negative cycles Directed graphs with arbitrary weights with negative cycles Finds a negative cycle or calculates distances to all vertices. Planar graphs with nonnegative weights Applications Network flows are a fundamental concept in graph theory and operations research, often used to model problems involving the transportation of goods, liquids, or information through a network. A network flow problem typically involves a directed graph where each edge represents a pipe, wire, or road, and each edge has a capacity, which is the maximum amount that can flow through it. The goal is to find a feasible flow that maximizes the flow from a source node to a sink node. Shortest Path Problems can be used to solve certain network flow problems, particularly when dealing with single-source, single-sink networks. In these scenarios, we can transform the network flow problem into a series of shortest path problems. Transformation Steps Create a Residual Graph: For each edge (u, v) in the original graph, create two edges in the residual graph: (u, v) with capacity c(u, v) (v, u) with capacity 0 The residual graph represents the remaining capacity available in the network. Find the Shortest Path: Use a shortest path algorithm (e.g., Dijkstra's algorithm, Bellman-Ford algorithm) to find the shortest path from the source node to the sink node in the residual graph. Augment the Flow: Find the minimum capacity along the shortest path. Increase the flow on the edges of the shortest path by this minimum capacity. Decrease the capacity of the edges in the forward direction and increase the capacity of the edges in the backward direction. Update the Residual Graph: Update the residual graph based on the augmented flow. Repeat: Repeat steps 2-4 until no more paths can be found from the source to the sink. All-pairs shortest paths The all-pairs shortest path problem finds the shortest paths between every pair of vertices , in the graph. The all-pairs shortest paths problem for unweighted directed graphs was introduced by , who observed that it could be solved by a linear number of matrix multiplications that takes a total time of . Undirected graph Directed graph Applications Shortest path algorithms are applied to automatically find directions between physical locations, such as driving directions on web mapping websites like MapQuest or Google Maps. For this application fast specialized algorithms are available. If one represents a nondeterministic abstract machine as a graph where vertices describe states and edges describe possible transitions, shortest path algorithms can be used to find an optimal sequence of choices to reach a certain goal state, or to establish lower bounds on the time needed to reach a given state. For example, if vertices represent the states of a puzzle like a Rubik's Cube and each directed edge corresponds to a single move or turn, shortest path algorithms can be used to find a solution that uses the minimum possible number of moves. In a networking or telecommunications mindset, this shortest path problem is sometimes called the min-delay path problem and usually tied with a widest path problem. For example, the algorithm may seek the shortest (min-delay) widest path, or widest shortest (min-delay) path. A more lighthearted application is the games of "six degrees of separation" that try to find the shortest path in graphs like movie stars appearing in the same film. 
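The residual-graph transformation steps described earlier in this section amount to the classic augmenting-path approach to maximum flow; when the augmenting path chosen in step 2 is shortest by edge count (breadth-first search), this is the Edmonds–Karp variant. Below is a minimal Python sketch under that assumption; the dict-of-dicts capacity format, function name, and example network are illustrative, not from the text.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Augmenting-path maximum flow (Edmonds-Karp style).

    capacity: dict of dicts, capacity[u][v] = capacity of edge (u, v).
    """
    # Step 1: residual graph, with reverse edges of capacity 0.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    flow = 0
    while True:
        # Step 2: shortest (fewest-edge) augmenting path by BFS.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # Step 5: no augmenting path remains.

        # Step 3: bottleneck (minimum residual capacity) along the path.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)

        # Step 4: update residual capacities (forward down, backward up).
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Hypothetical example: two routes from s to t
cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(cap, "s", "t"))  # 4
```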
Other applications, often studied in operations research, include plant and facility layout, robotics, transportation, and VLSI design. Road networks A road network can be considered as a graph with positive weights. The nodes represent road junctions and each edge of the graph is associated with a road segment between two junctions. The weight of an edge may correspond to the length of the associated road segment, the time needed to traverse the segment, or the cost of traversing the segment. Using directed edges it is also possible to model one-way streets. Such graphs are special in the sense that some edges are more important than others for long-distance travel (e.g. highways). This property has been formalized using the notion of highway dimension. There are a great number of algorithms that exploit this property and are therefore able to compute the shortest path a lot quicker than would be possible on general graphs. All of these algorithms work in two phases. In the first phase, the graph is preprocessed without knowing the source or target node. The second phase is the query phase. In this phase, source and target node are known. The idea is that the road network is static, so the preprocessing phase can be done once and used for a large number of queries on the same road network. The algorithm with the fastest known query time is called hub labeling and is able to compute shortest path on the road networks of Europe or the US in a fraction of a microsecond. Other techniques that have been used are: ALT (A* search, landmarks, and triangle inequality) Arc flags Contraction hierarchies Transit node routing Reach-based pruning Labeling Hub labels Related problems For shortest path problems in computational geometry, see Euclidean shortest path. The shortest multiple disconnected path is a representation of the primitive path network within the framework of Reptation theory. The widest path problem seeks a path so that the minimum label of any edge is as large as possible. Other related problems may be classified into the following categories. Paths with constraints Unlike the shortest path problem, which can be solved in polynomial time in graphs without negative cycles, shortest path problems which include additional constraints on the desired solution path are called Constrained Shortest Path First, and are harder to solve. One example is the constrained shortest path problem, which attempts to minimize the total cost of the path while at the same time maintaining another metric below a given threshold. This makes the problem NP-complete (such problems are not believed to be efficiently solvable for large sets of data, see P = NP problem). Another NP-complete example requires a specific set of vertices to be included in the path, which makes the problem similar to the Traveling Salesman Problem (TSP). The TSP is the problem of finding the shortest path that goes through every vertex exactly once, and returns to the start. The problem of finding the longest path in a graph is also NP-complete. Partial observability The Canadian traveller problem and the stochastic shortest path problem are generalizations where either the graph is not completely known to the mover, changes over time, or where actions (traversals) are probabilistic. Strategic shortest paths Sometimes, the edges in a graph have personalities: each edge has its own selfish interest. An example is a communication network, in which each edge is a computer that possibly belongs to a different person. 
Different computers have different transmission speeds, so every edge in the network has a numeric weight equal to the number of milliseconds it takes to transmit a message. Our goal is to send a message between two points in the network in the shortest time possible. If we know the transmission-time of each computer (the weight of each edge), then we can use a standard shortest-paths algorithm. If we do not know the transmission times, then we have to ask each computer to tell us its transmission-time. But, the computers may be selfish: a computer might tell us that its transmission time is very long, so that we will not bother it with our messages. A possible solution to this problem is to use a variant of the VCG mechanism, which gives the computers an incentive to reveal their true weights. Negative cycle detection In some cases, the main goal is not to find the shortest path, but only to detect if the graph contains a negative cycle. Some shortest-paths algorithms can be used for this purpose: The Bellman–Ford algorithm can be used to detect a negative cycle in time $O(|V||E|)$. Cherkassky and Goldberg survey several other algorithms for negative cycle detection. General algebraic framework on semirings: the algebraic path problem Many problems can be framed as a form of the shortest path for some suitably substituted notions of addition along a path and taking the minimum. The general approach to these is to consider the two operations to be those of a semiring. Semiring multiplication is done along the path, and the addition is between paths. This general framework is known as the algebraic path problem. Most of the classic shortest-path algorithms (and new ones) can be formulated as solving linear systems over such algebraic structures. More recently, an even more general framework for solving these (and much less obviously related problems) has been developed under the banner of valuation algebras. Shortest path in stochastic time-dependent networks In real life, a transportation network is usually stochastic and time-dependent. The travel duration on a road segment depends on many factors such as the amount of traffic (origin-destination matrix), road work, weather, accidents and vehicle breakdowns. A more realistic model of such a road network is a stochastic time-dependent (STD) network. There is no accepted definition of optimal path under uncertainty (that is, in stochastic road networks). It is a controversial subject, despite considerable progress during the past decade. One common definition is a path with the minimum expected travel time. The main advantage of this approach is that it can make use of efficient shortest path algorithms for deterministic networks. However, the resulting optimal path may not be reliable, because this approach fails to address travel time variability. To tackle this issue, some researchers use travel duration distribution instead of its expected value. So, they find the probability distribution of total travel duration using different optimization methods such as dynamic programming and Dijkstra's algorithm. These methods use stochastic optimization, specifically stochastic dynamic programming to find the shortest path in networks with probabilistic arc length. The terms travel time reliability and travel time variability are used as opposites in the transportation research literature: the higher the variability, the lower the reliability of predictions.
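As a concrete illustration of the Bellman–Ford-based negative-cycle detection mentioned earlier in this section, here is a minimal Python sketch that reports whether any negative-weight cycle exists, running in $O(|V||E|)$ time. The edge-list format and function name are assumptions made for the example.

```python
def has_negative_cycle(num_vertices, edges):
    """Bellman-Ford negative-cycle detection in O(|V| * |E|) time.

    edges: list of (u, v, weight) tuples, vertices numbered 0..num_vertices-1.
    Returns True if the graph contains a negative-weight cycle.
    """
    # Start all distances at 0 (equivalent to adding a virtual source joined
    # to every vertex with weight 0), so every cycle is examined.
    dist = [0] * num_vertices
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra relaxation round: any further improvement implies a negative cycle.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

edges = [(0, 1, 1), (1, 2, -3), (2, 0, 1)]   # cycle of total weight -1
print(has_negative_cycle(3, edges))          # True
```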
To account for variability, researchers have suggested two alternative definitions for an optimal path under uncertainty. The most reliable path is one that maximizes the probability of arriving on time given a travel time budget. An α-reliable path is one that minimizes the travel time budget required to arrive on time with a given probability.
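One way to make the reliability definitions above concrete is to estimate, by simulation, the probability that a given path meets a travel-time budget. The sketch below assumes independent, normally distributed edge travel times purely for illustration; the text does not prescribe any particular distribution or estimation method, and the names and numbers are arbitrary.

```python
import random

def on_time_probability(path_edges, budget, trials=100_000):
    """Estimate P(total travel time <= budget) for one path by Monte Carlo.

    path_edges: list of (mean, std) pairs, one per edge, modelled here as
    independent normal travel times (an illustrative assumption).
    """
    hits = 0
    for _ in range(trials):
        total = sum(random.gauss(mu, sigma) for mu, sigma in path_edges)
        if total <= budget:
            hits += 1
    return hits / trials

# Two candidate routes with equal expected time (30) but different variability.
high_variance_route = [(10, 6), (10, 6), (10, 6)]
low_variance_route = [(10, 1), (10, 1), (10, 1)]
print(on_time_probability(high_variance_route, budget=32))  # roughly 0.58
print(on_time_probability(low_variance_route, budget=32))   # roughly 0.88
```

Under this model the lower-variance route is the "most reliable path" for a budget of 32, even though both routes have the same expected travel time.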
Mathematics
Graph theory
null
41993
https://en.wikipedia.org/wiki/Intensity%20%28physics%29
Intensity (physics)
In physics and many other areas of science and engineering the intensity or flux of radiant energy is the power transferred per unit area, where the area is measured on the plane perpendicular to the direction of propagation of the energy. In the SI system, it has units watts per square metre (W/m2), or kg⋅s−3 in base units. Intensity is used most frequently with waves such as acoustic waves (sound), matter waves such as electrons in electron microscopes, and electromagnetic waves such as light or radio waves, in which case the average power transfer over one period of the wave is used. Intensity can be applied to other circumstances where energy is transferred. For example, one could calculate the intensity of the kinetic energy carried by drops of water from a garden sprinkler. The word "intensity" as used here is not synonymous with "strength", "amplitude", "magnitude", or "level", as it sometimes is in colloquial speech. Intensity can be found by taking the energy density (energy per unit volume) at a point in space and multiplying it by the velocity at which the energy is moving. The resulting vector has the units of power divided by area (i.e., surface power density). The intensity of a wave is proportional to the square of its amplitude. For example, the intensity of an electromagnetic wave is proportional to the square of the wave's electric field amplitude. Mathematical description If a point source is radiating energy in all directions (producing a spherical wave), and no energy is absorbed or scattered by the medium, then the intensity decreases in proportion to the distance from the object squared. This is an example of the inverse-square law. Applying the law of conservation of energy, if the net power emanating is constant, $P = \oint \vec{I} \cdot d\vec{A}$, where $P$ is the net power radiated; $\vec{I}$ is the intensity vector as a function of position; the magnitude $|\vec{I}|$ is the intensity as a function of position; $d\vec{A}$ is a differential element of a closed surface that contains the source. If one integrates a uniform intensity, $|\vec{I}| = \text{const.}$, over a surface that is perpendicular to the intensity vector, for instance over a sphere centered around the point source, the equation becomes $P = |\vec{I}| \cdot A_{\mathrm{surf}} = |\vec{I}| \cdot 4\pi r^2$, where $|\vec{I}|$ is the intensity at the surface of the sphere; $r$ is the radius of the sphere; $A_{\mathrm{surf}} = 4\pi r^2$ is the expression for the surface area of a sphere. Solving for $|\vec{I}|$ gives $|\vec{I}| = \frac{P}{4\pi r^2}$. If the medium is damped, then the intensity drops off more quickly than the above equation suggests. Anything that can transmit energy can have an intensity associated with it. For a monochromatic propagating electromagnetic wave, such as a plane wave or a Gaussian beam, if $E$ is the complex amplitude of the electric field, then the time-averaged energy density of the wave, travelling in a non-magnetic material, is given by: $\langle U \rangle = \frac{n^2 \varepsilon_0}{2} |E|^2$, and the local intensity is obtained by multiplying this expression by the wave velocity, $c/n$: $I = \frac{c n \varepsilon_0}{2} |E|^2$, where $n$ is the refractive index; $c$ is the speed of light in vacuum; $\varepsilon_0$ is the vacuum permittivity. For non-monochromatic waves, the intensity contributions of different spectral components can simply be added. The treatment above does not hold for arbitrary electromagnetic fields. For example, an evanescent wave may have a finite electrical amplitude while not transferring any power. The intensity should then be defined as the magnitude of the Poynting vector. Electron beams For electron beams, intensity is the probability of electrons reaching some particular position on a detector (e.g. 
a charge-coupled device) which is used to produce images that are interpreted in terms of both the microstructure of inorganic or biological materials and atomic-scale structure. The map of the intensity of scattered electrons or x-rays as a function of direction is also extensively used in crystallography. Alternative definitions In photometry and radiometry intensity has a different meaning: it is the luminous or radiant power per unit solid angle. This can cause confusion in optics, where intensity can mean any of radiant intensity, luminous intensity or irradiance, depending on the background of the person using the term. Radiance is also sometimes called intensity, especially by astronomers and astrophysicists, and in heat transfer.
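As a small numerical illustration of the relations given in the mathematical description above (the inverse-square law for a point source and the plane-wave intensity formula), here is a brief Python sketch; the function names and example values are arbitrary choices for the illustration.

```python
import math

def intensity_point_source(power_w, radius_m):
    """Intensity (W/m^2) at distance r from an isotropic point source,
    assuming no absorption or scattering: |I| = P / (4 * pi * r^2)."""
    return power_w / (4 * math.pi * radius_m ** 2)

def em_plane_wave_intensity(e_amplitude_v_per_m, refractive_index=1.0):
    """Time-averaged intensity of a monochromatic plane wave in a
    non-magnetic medium: I = (c * n * eps0 / 2) * |E|^2."""
    c = 2.99792458e8          # speed of light in vacuum, m/s
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    return 0.5 * c * refractive_index * eps0 * e_amplitude_v_per_m ** 2

print(intensity_point_source(100, 1.0))   # ~7.96 W/m^2 for a 100 W source at 1 m
print(intensity_point_source(100, 2.0))   # ~1.99 W/m^2: four times weaker at twice the distance
print(em_plane_wave_intensity(1000.0))    # ~1327 W/m^2 for |E| = 1 kV/m in vacuum
```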
Physical sciences
Optics
Physics
41997
https://en.wikipedia.org/wiki/Twin%20prime
Twin prime
A twin prime is a prime number that is either 2 less or 2 more than another prime number—for example, either member of the twin prime pair (17, 19) or (41, 43). In other words, a twin prime is a prime that has a prime gap of two. Sometimes the term twin prime is used for a pair of twin primes; an alternative name for this is prime twin or prime pair. Twin primes become increasingly rare as one examines larger ranges, in keeping with the general tendency of gaps between adjacent primes to become larger as the numbers themselves get larger. However, it is unknown whether there are infinitely many twin primes (the so-called twin prime conjecture) or if there is a largest pair. The breakthrough work of Yitang Zhang in 2013, as well as work by James Maynard, Terence Tao and others, has made substantial progress towards proving that there are infinitely many twin primes, but at present this remains unsolved. Properties Usually the pair (2, 3) is not considered to be a pair of twin primes. Since 2 is the only even prime, this pair is the only pair of prime numbers that differ by one; thus twin primes are as closely spaced as possible for any other two primes. The first several twin prime pairs are (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73), (101, 103), (107, 109), (137, 139), .... Five is the only prime that belongs to two pairs, as every twin prime pair greater than (3, 5) is of the form (6n − 1, 6n + 1) for some natural number n; that is, the number between the two primes is a multiple of 6. As a result, the sum of any pair of twin primes (other than 3 and 5) is divisible by 12. Brun's theorem In 1915, Viggo Brun showed that the sum of reciprocals of the twin primes was convergent. This famous result, called Brun's theorem, was the first use of the Brun sieve and helped initiate the development of modern sieve theory. The modern version of Brun's argument can be used to show that the number of twin primes less than $N$ does not exceed $\frac{CN}{(\log N)^2}$ for some absolute constant $C > 0$. In fact, it is bounded above by $\frac{8 C_2 N}{(\log N)^2}\left(1 + O\!\left(\frac{\log \log N}{\log N}\right)\right)$, where $C_2$ is the twin prime constant (slightly less than 2/3), given below. Twin prime conjecture The question of whether there exist infinitely many twin primes has been one of the great open questions in number theory for many years. This is the content of the twin prime conjecture, which states that there are infinitely many primes $p$ such that $p + 2$ is also prime. In 1849, de Polignac made the more general conjecture that for every natural number $k$, there are infinitely many primes $p$ such that $p + 2k$ is also prime. The case $k = 1$ of de Polignac's conjecture is the twin prime conjecture. A stronger form of the twin prime conjecture, the Hardy–Littlewood conjecture (see below), postulates a distribution law for twin primes akin to the prime number theorem. On 17 April 2013, Yitang Zhang announced a proof that there exists an integer $N$ that is less than 70 million, where there are infinitely many pairs of primes that differ by $N$. Zhang's paper was accepted in early May 2013. Terence Tao subsequently proposed a Polymath Project collaborative effort to optimize Zhang's bound. One year after Zhang's announcement, the bound had been reduced to 246, where it remains. These improved bounds were discovered using a different approach that was simpler than Zhang's and was discovered independently by James Maynard and Terence Tao. This second approach also gave bounds for the smallest $f(m)$ needed to guarantee that infinitely many intervals of width $f(m)$ contain at least $m$ primes. Moreover (see also the next section) assuming the Elliott–Halberstam conjecture and its generalized form, the Polymath Project wiki states that the bound is 12 and 6, respectively. 
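A short Python sketch can list the first twin prime pairs and check the elementary facts stated above: beyond (3, 5), each pair has the form (6n − 1, 6n + 1), so the sum of the pair is divisible by 12. The helper names are illustrative, and trial division is used only because the examples are tiny.

```python
def is_prime(n):
    """Trial-division primality test (sufficient for small examples)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def twin_prime_pairs(limit):
    """Return all twin prime pairs (p, p + 2) with p + 2 <= limit."""
    return [(p, p + 2) for p in range(2, limit - 1) if is_prime(p) and is_prime(p + 2)]

pairs = twin_prime_pairs(100)
print(pairs)  # [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
for p, q in pairs:
    if (p, q) != (3, 5):
        assert (p + 1) % 6 == 0   # the number between the two primes is a multiple of 6
        assert (p + q) % 12 == 0  # hence the sum of the pair is divisible by 12
```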
A strengthening of Goldbach’s conjecture, if proved, would also prove there is an infinite number of twin primes, as would the existence of Siegel zeroes. Other theorems weaker than the twin prime conjecture In 1940, Paul Erdős showed that there is a constant $c < 1$ and infinitely many primes $p$ such that $p' - p < c \ln p$, where $p'$ denotes the next prime after $p$. What this means is that we can find infinitely many intervals that contain two primes as long as we let these intervals grow slowly in size as we move to bigger and bigger primes. Here, "grow slowly" means that the length of these intervals can grow logarithmically. This result was successively improved; in 1986 Helmut Maier showed that a constant $c < 0.25$ can be used. In 2004 Daniel Goldston and Cem Yıldırım showed that the constant could be improved further to $c = 0.085786\ldots$ In 2005, Goldston, Pintz, and Yıldırım established that $c$ can be chosen to be arbitrarily small, i.e. $\liminf_{n\to\infty}\frac{p_{n+1}-p_n}{\ln p_n} = 0$. On the other hand, this result does not rule out that there may not be infinitely many intervals that contain two primes if we only allow the intervals to grow in size as, for example, $c \ln \ln p$. By assuming the Elliott–Halberstam conjecture or a slightly weaker version, they were able to show that there are infinitely many $n$ such that at least two of $n$, $n + 2$, $n + 6$, $n + 8$, $n + 12$, $n + 18$, or $n + 20$ are prime. Under a stronger hypothesis they showed that for infinitely many $n$, at least two of $n$, $n + 2$, $n + 4$, and $n + 6$ are prime. The result of Yitang Zhang, $\liminf_{n\to\infty}(p_{n+1} - p_n) < N$ with $N = 7 \times 10^{7}$, is a major improvement on the Goldston–Graham–Pintz–Yıldırım result. The Polymath Project optimization of Zhang's bound and the work of Maynard have reduced the bound: the limit inferior is at most 246. Conjectures First Hardy–Littlewood conjecture The first Hardy–Littlewood conjecture (named after G. H. Hardy and John Littlewood) is a generalization of the twin prime conjecture. It is concerned with the distribution of prime constellations, including twin primes, in analogy to the prime number theorem. Let $\pi_2(x)$ denote the number of primes $p \le x$ such that $p + 2$ is also prime. Define the twin prime constant $C_2$ as $C_2 = \prod_{p \ \mathrm{prime},\, p \ge 3} \left(1 - \frac{1}{(p-1)^2}\right) \approx 0.6601618\ldots$ (Here the product extends over all prime numbers $p \ge 3$.) Then a special case of the first Hardy-Littlewood conjecture is that $\pi_2(x) \sim 2 C_2 \frac{x}{(\ln x)^2} \sim 2 C_2 \int_2^x \frac{dt}{(\ln t)^2}$ in the sense that the quotient of the two expressions tends to 1 as $x$ approaches infinity. (The second ~ is not part of the conjecture and is proven by integration by parts.) The conjecture can be justified (but not proven) by assuming that $1/\ln t$ describes the density function of the prime distribution. This assumption, which is suggested by the prime number theorem, implies the twin prime conjecture, as shown in the formula for $\pi_2(x)$ above. The fully general first Hardy–Littlewood conjecture on prime $k$-tuples (not given here) implies that the second Hardy–Littlewood conjecture is false. This conjecture has been extended by Dickson's conjecture. Polignac's conjecture Polignac's conjecture from 1849 states that for every positive even integer $k$, there are infinitely many consecutive prime pairs $p$ and $p'$ such that $p' - p = k$ (i.e. there are infinitely many prime gaps of size $k$). The case $k = 2$ is the twin prime conjecture. The conjecture has not yet been proven or disproven for any specific value of $k$, but Zhang's result proves that it is true for at least one (currently unknown) value of $k$. Indeed, if such a $k$ did not exist, then for any positive even natural number $N$ there are at most finitely many $n$ such that $p_{n+1} - p_n = m$ for all $m < N$, and so for $n$ large enough we have $p_{n+1} - p_n > N$, which would contradict Zhang's result. Large twin primes Beginning in 2007, two distributed computing projects, Twin Prime Search and PrimeGrid, have produced several record-largest twin primes. 
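The twin prime constant $C_2$ defined above can be approximated numerically by truncating its product over odd primes, as in the sketch below; the cutoff is arbitrary, so this is a rough numerical illustration rather than a rigorous computation of the constant.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, prime in enumerate(sieve) if prime]

def twin_prime_constant(cutoff=1_000_000):
    """Truncated product C2 = prod over primes p >= 3 of (1 - 1/(p-1)^2)."""
    c2 = 1.0
    for p in primes_up_to(cutoff):
        if p >= 3:
            c2 *= 1.0 - 1.0 / (p - 1) ** 2
    return c2

print(twin_prime_constant())  # ~0.66016, slightly less than 2/3
```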
The current largest twin prime pair known is $2996863034895 \times 2^{1290000} \pm 1$, with 388,342 decimal digits. It was discovered in September 2016. There are 808,675,888,577,436 twin prime pairs below $10^{18}$. An empirical analysis of all prime pairs up to $4.35 \times 10^{15}$ shows that if the number of such pairs less than $x$ is $f(x) \cdot x/(\ln x)^2$ then $f(x)$ is about 1.7 for small $x$ and decreases towards about 1.3 as $x$ tends to infinity. The limiting value of $f(x)$ is conjectured to equal twice the twin prime constant ($2C_2 \approx 1.32$) (not to be confused with Brun's constant), according to the Hardy–Littlewood conjecture. Other elementary properties Every third odd number is divisible by 3, and therefore no three successive odd numbers can be prime unless one of them is 3. Therefore, 5 is the only prime that is part of two twin prime pairs. The lower member of a pair is by definition a Chen prime. If m − 4 or m + 6 is also prime then the three primes are called a prime triplet. It has been proven that the pair (m, m + 2) is a twin prime if and only if $4((m-1)! + 1) \equiv -m \pmod{m(m+2)}$. For a twin prime pair of the form (6n − 1, 6n + 1) for some natural number n > 1, n must end in the digit 0, 2, 3, 5, 7, or 8 (). If n were to end in 1 or 6, 6n would end in 6, and 6n − 1 would be a multiple of 5. This is not prime unless n = 1. Likewise, if n were to end in 4 or 9, 6n would end in 4, and 6n + 1 would be a multiple of 5. The same rule applies modulo any prime p ≥ 5: If $n \equiv \pm 6^{-1} \pmod p$, then one of the pair will be divisible by p and will not be a twin prime pair unless $6n = p \pm 1$. p = 5 just happens to produce particularly simple patterns in base 10. Isolated prime An isolated prime (also known as single prime or non-twin prime) is a prime number p such that neither p − 2 nor p + 2 is prime. In other words, p is not part of a twin prime pair. For example, 23 is an isolated prime, since 21 and 25 are both composite. The first few isolated primes are 2, 23, 37, 47, 53, 67, 79, 83, 89, 97, ... . It follows from Brun's theorem that almost all primes are isolated in the sense that the ratio of the number of isolated primes less than a given threshold n and the number of all primes less than n tends to 1 as n tends to infinity.
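To illustrate the kind of empirical comparison described above, the sketch below counts twin prime pairs up to a bound and compares the count with the first Hardy–Littlewood approximation $2 C_2 x/(\ln x)^2$. The chosen bound and helper names are assumptions for the example, and the printed values are approximate.

```python
import math

def sieve(n):
    """Boolean primality table up to n via the sieve of Eratosthenes."""
    prime = [True] * (n + 1)
    prime[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if prime[i]:
            prime[i * i :: i] = [False] * len(prime[i * i :: i])
    return prime

def compare_with_hardy_littlewood(x, c2=0.6601618158):
    """Count twin prime pairs (p, p + 2) with p + 2 <= x and compare with
    the first Hardy-Littlewood approximation 2 * C2 * x / (ln x)^2."""
    prime = sieve(x)
    actual = sum(1 for p in range(3, x - 1) if prime[p] and prime[p + 2])
    estimate = 2 * c2 * x / math.log(x) ** 2
    return actual, estimate

print(compare_with_hardy_littlewood(1_000_000))  # roughly (8169, 6918): same order of magnitude
```

The simple $x/(\ln x)^2$ form underestimates the count at finite $x$; the integral form of the approximation tracks the actual counts more closely.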
Mathematics
Prime numbers
null
42048
https://en.wikipedia.org/wiki/Plastic%20surgery
Plastic surgery
Plastic surgery is a surgical specialty involving the restoration, reconstruction, or alteration of the human body. It can be divided into two main categories: reconstructive surgery and cosmetic surgery. Reconstructive surgery covers a wide range of specialties, including craniofacial surgery, hand surgery, microsurgery, and the treatment of burns. This category of surgery focuses on restoring a body part or improving its function. In contrast, cosmetic (or aesthetic) surgery focuses solely on improving the physical appearance of the body. A comprehensive definition of plastic surgery has never been established, because it has no distinct anatomical object and thus overlaps with practically all other surgical specialties. An essential feature of plastic surgery is that it involves the treatment of conditions that require or may require tissue relocation skills. Etymology The word plastic in plastic surgery is in reference to the concept of "reshaping" and comes from the Greek πλαστική (τέχνη), plastikē (tekhnē), "the art of modelling" of malleable flesh. This meaning in English is seen as early as 1598. In the surgical context, the word "plastic" first appeared in 1816 and was established in 1838 by Eduard Zeis, preceding the modern technical usage of the word as "engineering material made from petroleum" by 70 years. History Treatments for the plastic repair of a broken nose are first mentioned in the Egyptian medical text called the Edwin Smith papyrus. This early trauma surgery text was named after the American Egyptologist Edwin Smith. Reconstructive surgery techniques were being carried out in India by 800 BC. Sushruta was a physician who made contributions to the field of plastic and cataract surgery in the 6th century BC. The Romans also performed plastic cosmetic surgery, using simple techniques, such as repairing damaged ears, from around the 1st century BC. For religious reasons, they did not dissect either human beings or animals; thus, their knowledge was based in its entirety on the texts of their Greek predecessors. Notwithstanding, Aulus Cornelius Celsus left some accurate anatomical descriptions, some of which—for instance, his studies on the genitalia and the skeleton—are of special interest to plastic surgery. Arabs practiced plastic surgery during the Abbasid Caliphate in 750 AD. The Arabic translations made their way into Europe via intermediaries. In Italy, the Branca family of Sicily and Gaspare Tagliacozzi (Bologna) became familiar with the techniques of Sushruta. Writing on all fields of surgery, the Arab physician, surgeon, and chemist Al-Zahrawi discusses the use of silk thread sutures to achieve good cosmesis. He describes what is thought to be the first attempt at reduction mammaplasty for the management of gynaecomastia. He gives detailed descriptions of other basic surgical techniques such as cautery and wound management. British physicians travelled to India to see rhinoplasties being performed by Indian methods. Reports on Indian rhinoplasty performed by a Kumhar (potter) vaidya were published in the Gentleman's Magazine by 1794. Joseph Constantine Carpue spent 20 years in India studying local plastic surgery methods. Carpue was able to perform the first major such surgery in the Western world in 1815. Instruments described in the Sushruta Samhita were further modified in the Western world. In 1465, Sabuncuoglu's book gave a more informative and up-to-date description and classification of hypospadias. 
Localization of the urethral meatus was described in detail. Sabuncuoglu also detailed the description and classification of ambiguous genitalia. In mid-15th-century Europe, Heinrich von Pfolspeundt described a process "to make a new nose for one who lacks it entirely, and the dogs have devoured it" by removing skin from the back of the arm and suturing it in place. However, because of the dangers associated with surgery in any form, especially that involving the head or face, it was not until the 19th and 20th centuries that such surgery became common. In 1814, Joseph Carpue successfully performed an operative procedure on a British military officer who had lost his nose to the toxic effects of mercury treatments. In 1818, German surgeon Carl Ferdinand von Graefe published his major work entitled Rhinoplastik. Von Graefe modified the Italian method using a free skin graft from the arm instead of the original delayed pedicle flap. The first American plastic surgeon was John Peter Mettauer, who, in 1827, performed the first cleft palate operation with instruments that he designed himself. Johann Friedrich Dieffenbach specialized in skin transplantation and early plastic surgery. His work in rhinoplastic and maxillofacial surgery established many modern techniques of reconstructive surgery. In 1845, Dieffenbach wrote a comprehensive text on rhinoplasty, titled Operative Chirurgie, and introduced the concept of reoperation to improve the cosmetic appearance of the reconstructed nose. Dieffenbach has been called the "father of plastic surgery". Another case of plastic surgery for nose reconstruction from 1884 at Bellevue Hospital was described in Scientific American. In 1891, American otorhinolaryngologist John Roe presented an example of his work: a young woman on whom he reduced a dorsal nasal hump for cosmetic indications. In 1892, Robert Weir experimented unsuccessfully with xenografts (duck sternum) in the reconstruction of sunken noses. In 1896, James Israel, a urological surgeon from Germany, and in 1889 George Monks of the United States each described the successful use of heterogeneous free-bone grafting to reconstruct saddle nose defects. In 1898, Jacques Joseph, the German orthopaedic-trained surgeon, published his first account of reduction rhinoplasty. In 1910, Alexander Ostroumov, the Russian pharmacist, and perfume and cosmetics manufacturer, founded a unique plastic surgery department in his Moscow Institute of Medical Cosmetics. In 1928, Jacques Joseph published . Nascency of maxillofacial surgery The development of weapons such as machine guns and explosive shells during World War I created trench warfare, which led to a rapid increase in the number of mutilations to the faces and the heads of soldiers because the trenches mainly offered protection to the body. The surgeons, who were not prepared for these injuries, were even less prepared for a large number of injuries and had to react quickly and intelligently to treat the greatest number. Facial injuries were hard to treat on the front line and, because of the sanitary conditions, many infections could occur. Sometimes, some stitches were made on a jagged wound without considering the amount of flesh that had been lost, so the resulting scars were hideous and disfigured soldiers. Some of the wounded had severe injuries and the stitches were not sufficient so some became blind, or were left with gaping holes instead of their nose. 
Harold Gillies, alarmed by the number of new facial injuries and the lack of good surgical techniques, decided to dedicate an entire hospital to reconstructing facial injuries as fully as possible. He took into account the psychological dimension. Gillies introduced skin grafts into the treatment of soldiers, so that they would be less horrified when looking at themselves in the mirror. It was this multidisciplinary approach to the treatment of facial lesions, bringing together plastic surgeons, dental surgeons, technicians, and specialized nurses, that made it possible to develop techniques leading to the reconstruction of injured faces. The dentist Auguste Charles Valadier, and then Gillies, had earlier identified the need to advance the specialty of maxillofacial surgery, which would be dedicated directly to the management of war wounds. Gillies developed a new technique using rotational and transposition flaps, as well as bone grafts from the ribs and tibia, to reconstruct facial defects caused by weapons during the war. Through experimenting with this technique he learned that he had to start by moving healthy tissue back to its normal position, and only then fill the remaining defect with tissue taken from another place on the soldier's body. One of the most successful skin grafting techniques avoided completely severing the graft's connection to the body. This was done by releasing and lifting a flap of skin near the wound. The flap of skin, still connected to the donor site, would then be swung over the site of the wound, allowing the maintenance of physical connection and ensuring that blood is supplied to the skin, increasing the chances of the skin graft being accepted by the body. At the same time, improvements in treating infections meant that serious injuries had become survivable, thanks largely to Gillies's new techniques. Some soldiers arrived at Gillies's hospital without noses, chins, cheekbones, or even eyes. But for them, the most important trauma was psychological. Development of modern techniques The father of modern plastic surgery is generally considered to have been Sir Harold Gillies. A New Zealand otolaryngologist working in London, he developed many of the techniques of modern facial surgery in caring for soldiers with disfiguring facial injuries during the First World War. During World War I, he worked as a medical minder with the Royal Army Medical Corps. After working with the French oral and maxillofacial surgeon Hippolyte Morestin on skin grafts, he persuaded the army's chief surgeon, Arbuthnot-Lane, to establish a facial injury ward at the Cambridge Military Hospital, Aldershot, later upgraded to a new hospital for facial repairs at Sidcup in 1917. There Gillies and his colleagues developed many techniques of plastic surgery; more than 11,000 operations were performed on more than 5,000 men (mostly soldiers with facial injuries, usually from gunshot wounds). After the war, Gillies developed a private practice with Rainsford Mowlem, including many famous patients, and travelled extensively to promote his advanced techniques worldwide. In 1930, Gillies' cousin, Archibald McIndoe, joined the practice and became committed to plastic surgery. When World War II broke out, plastic surgery provision was largely divided between the different services of the armed forces, and Gillies and his team were split up. 
Gillies himself was sent to Rooksdown House near Basingstoke, which became the principal army plastic surgery unit; Tommy Kilner (who had worked with Gillies during the First World War, and who now has a surgical instrument named after him, the kilner cheek retractor) went to Queen Mary's Hospital, Roehampton; and Mowlem went to St Albans. McIndoe, consultant to the RAF, moved to the recently rebuilt Queen Victoria Hospital in East Grinstead, Sussex, and founded a Centre for Plastic and Jaw Surgery. There, he treated very deep burns, and serious facial disfigurement, such as loss of eyelids, typical of those caused to aircrew by burning fuel. McIndoe is often recognized for not only developing new techniques for treating badly burned faces and hands but also for recognising the importance of the rehabilitation of the casualties and particularly of social reintegration back into normal life. He disposed of the "convalescent uniforms" and let the patients use their service uniforms instead. With the help of two friends, Neville and Elaine Blond, he also convinced the locals to support the patients and invite them to their homes. McIndoe kept referring to them as "his boys" and the staff called him "The Boss" or "The Maestro". His other important work included the development of the walking-stalk skin graft, and the discovery that immersion in saline promoted healing as well as improving survival rates for patients with extensive burns—this was a serendipitous discovery drawn from observation of differential healing rates in pilots who had come down on land and in the sea. His radical, experimental treatments led to the formation of the Guinea Pig Club at Queen Victoria Hospital, Sussex. Among the better-known members of his "club" were Richard Hillary, Bill Foxley and Jimmy Edwards. Sub-specialties Plastic surgery is a broad field, and may be subdivided further. In the United States, plastic surgeons are board certified by American Board of Plastic Surgery. Subdisciplines of plastic surgery may include: Aesthetic surgery Aesthetic surgery is a central component of plastic surgery and includes facial and body aesthetic surgery. Plastic surgeons use cosmetic surgical principles in all reconstructive surgical procedures as well as isolated operations to improve overall appearance. Burn surgery Burn surgery generally takes place in two phases. Acute burn surgery is the treatment immediately after a burn. Reconstructive burn surgery takes place after the burn wounds have healed. Craniofacial surgery Craniofacial surgery is divided into pediatric and adult craniofacial surgery. Pediatric craniofacial surgery mostly revolves around the treatment of congenital anomalies of the craniofacial skeleton and soft tissues, such as cleft lip and palate, microtia, craniosynostosis, and pediatric fractures. Adult craniofacial surgery deals mostly with reconstructive surgeries after trauma or cancer and revision surgeries along with orthognathic surgery and facial feminization surgery. Craniofacial surgery is an important part of all plastic surgery training programs. Further training and subspecialisation is obtained via a craniofacial fellowship. Craniofacial surgery is also practiced by maxillofacial surgeons. Ethnic plastic surgery Ethnic plastic surgery is plastic surgery performed to change ethnic attributes, often considered used as a way of "passing". 
Hand surgery Hand surgery is concerned with acute injuries and chronic diseases of the hand and wrist, correction of congenital malformations of the upper extremities, and peripheral nerve problems (such as brachial plexus injuries or carpal tunnel syndrome). Hand surgery is an important part of training in plastic surgery, as well as microsurgery, which is necessary to replant an amputated extremity. The hand surgery field is also practiced by orthopedic surgeons and general surgeons. Scar tissue formation after surgery can be problematic on the delicate hand, causing loss of dexterity and digit function if severe enough. There have been cases of surgery on women's hands in order to correct perceived flaws to create the perfect engagement ring photo. Microsurgery Microsurgery is generally concerned with the reconstruction of missing tissues by transferring a piece of tissue to the reconstruction site and reconnecting blood vessels. Popular subspecialty areas are breast reconstruction, head and neck reconstruction, hand surgery/replantation, and brachial plexus surgery. Pediatric plastic surgery Children often face medical issues very different from the experiences of an adult patient. Many birth defects or syndromes present at birth are best treated in childhood, and pediatric plastic surgeons specialize in treating these conditions in children. Conditions commonly treated by pediatric plastic surgeons include craniofacial anomalies, Syndactyly (webbing of the fingers and toes), Polydactyly (excess fingers and toes at birth), cleft lip and palate, and congenital hand deformities. Prison plastic surgery Plastic surgery performed on an incarcerated population in order to affect their recidivism rate, a practice instituted in the early 20th century that lasted until the mid-1990s. Separate from surgery performed for medical need. Techniques and procedures In plastic surgery, the transfer of skin tissue (skin grafting) is a very common procedure. Skin grafts can be derived from the recipient or donors: Autografts are taken from the recipient. If absent or deficient of natural tissue, alternatives can be cultured sheets of epithelial cells in vitro or synthetic compounds, such as integra, which consists of silicone and bovine tendon collagen with glycosaminoglycans. Allografts are taken from a donor of the same species. Kidney transplants are an example of allograft transfer. Joseph Murray is credited for completing the first successful kidney transplantation in 1954. Xenografts are taken from a donor of a different species. Usually, good results would be expected from plastic surgery that emphasize careful planning of incisions so that they fall within the line of natural skin folds or lines, appropriate choice of wound closure, use of best available suture materials, and early removal of exposed sutures so that the wound is held closed by buried sutures. Cosmetic surgery procedures Cosmetic surgery is a voluntary or elective surgery that is performed on normal parts of the body with the only purpose of improving a person's appearance or removing signs of aging. Some cosmetic surgeries such as breast reduction are also functional and can help to relieve symptoms of discomfort such as back ache or neck ache. Cosmetic surgeries are also undertaken following breast cancer and mastectomy to recreate the natural breast shape which has been lost during the process of removing the cancer. In 2014, nearly 16 million cosmetic procedures were performed in the United States alone. 
The number of cosmetic procedures performed in the United States has almost doubled since the start of the century. 92% of cosmetic procedures were performed on women in 2014, up from 88% in 2001. 15.6 million cosmetic procedures were performed in 2020, with the five most common surgeries being rhinoplasties, blepharoplasties, rhytidectomies, liposuctions, and breast augmentation. Breast augmentation continues to be one of the top 5 cosmetic surgical procedures and has been since 2006. Silicone implants were used in 84% and saline implants in 16% of all breast augmentations in 2020. The American Society for Aesthetic Plastic Surgery looked at the statistics for 34 different cosmetic procedures. Nineteen of the procedures were surgical, such as rhinoplasties or rhytidectomies. The nonsurgical procedures included botox and laser hair removal. In 2010, their survey revealed that there were 9,336,814 total procedures in the United States. Of those, 1,622,290 procedures were surgical (p. 5). They also found that a large majority, 81%, of the procedures were done on Caucasian people (p. 12). In 1949, 15,000 Americans underwent cosmetic surgery procedures and by 1969 this number rose to almost half a million people. The American Society of Plastic Surgeons estimates that more than 333,000 cosmetic procedures were performed on patients 18 years of age or younger in the US in 2005 compared to approx. 14,000 in 1996. In 2018, more than 226,994 patients between the ages of 13 and 19 underwent plastic surgery compared to just over 218,900 patients in the same age group in 2010. Concerns about young people undergoing plastic surgery include the financial burden of additional surgical procedures needed to correct problems after the initial cosmetic surgery, long-term health complications from plastic surgery, and unaddressed mental health issues that may have led to surgery. The increased use of cosmetic procedures crosses racial and ethnic lines in the U.S., with increases seen among African-Americans, Asian Americans and Hispanic Americans as well as Caucasian Americans. In Asia, cosmetic surgery has become more popular, and countries such as China and India have become Asia's biggest cosmetic surgery markets. South Korea is also rising in popularity in Asian and Western countries due to their expertise in facial bone surgeries (see cosmetic surgery in South Korea). Plastic surgery is increasing slowly, rising 115% from 2000 to 2015. "According to the annual plastic surgery procedural statistics, there were 15.9 million surgical and minimally-invasive cosmetic procedures performed in the United States in 2015, a 2 percent increase over 2014." A study from 2021 found that requests for cosmetic procedures had increased significantly since the beginning of the COVID-19 pandemic, possibly due to the increase in videoconferencing; cited estimates include a 10% increase in the United States and a 20% increase in France. 
The most popular aesthetic/cosmetic procedures include: Abdominoplasty ("tummy tuck"): reshaping and firming of the abdomen Blepharoplasty ("eyelid surgery"): reshaping of upper/lower eyelids including Asian blepharoplasty Phalloplasty ("penile surgery"): construction (or reconstruction) of a penis or, sometimes, artificial modification of the penis by surgery, often for cosmetic purposes Mammoplasty: Breast augmentations ("breast implant" or "boob job"): augmentation of the breasts by means of fat grafting, saline, or silicone gel prosthetics, which was initially performed for women with micromastia Reduction mammoplasty ("breast reduction"): removal of skin and glandular tissue, which is done to reduce back and shoulder pain in women with gigantomastia and for men with gynecomastia Mastopexy ("breast lift"): Lifting or reshaping of breasts to make them less saggy, often after weight loss (after a pregnancy, for example). It involves removal of breast skin as opposed to glandular tissue Augmentation mastopexy ("breast lift with breast implants"): Lifting breasts to make them less saggy, repositioning the nipple to a higher location, and increasing breast size with saline or silicone gel implants. Recent studies of a newer technique for simultaneous augmentation mastopexy (SAM) indicate that it is a safe surgical procedure with minimal medical complications. The SAM technique involves invaginating and tacking the tissues first, in order to previsualize the result, before making any surgical incisions to the breast. Buttock augmentation ("butt implant"): enhancement of the buttocks using silicone implants or fat grafting ("Brazilian butt lift") where fat is transferred from other areas of the body Cryolipolysis: refers to a medical device used to destroy fat cells. Its principle relies on controlled cooling for non-invasive local reduction of fat deposits to reshape body contours. Cryoneuromodulation: Treatment of superficial and subcutaneous tissue structures using gaseous nitrous oxide, including temporary wrinkle reduction, temporary pain reduction, treatment of dermatologic conditions, and focal cryo-treatment of tissue Calf augmentation: done by silicone implants or fat transfer to add bulk to calf muscles Labiaplasty: surgical reduction and reshaping of the labia Lip augmentation: alter the appearance of the lips by increasing their fullness through surgical enlargement with lip implants or nonsurgical enhancement with injectable fillers Cheiloplasty: surgical reconstruction of the lip Rhinoplasty ("nose job"): reshaping of the nose sometimes used to correct breathing impaired by structural defects. Otoplasty ("ear surgery"/"ear pinning"): reshaping of the ear, most often done by pinning the protruding ear closer to the head. Rhytidectomy ("face lift"): removal of wrinkles and signs of aging from the face Neck lift: tightening of lax tissues in the neck. This procedure is often combined with a facelift for lower face rejuvenation. Browplasty ("brow lift" or "forehead lift"): elevates eyebrows, smooths forehead skin Midface lift ("cheek lift"): tightening of the cheeks Genioplasty: augmentation of the chin with an individual's bones or with the use of an implant, usually silicone, by suture of the soft tissue Mentoplasty: surgery to the chin. This can involve either enhancing or reducing the size of the chin. Enhancements are achieved with the use of facial implants. Reduction of the chin involved reducing the size of the chin bone. 
Cheek augmentation ("cheek implant"): implants to the cheek Orthognathic surgery: altering the upper and lower jaw bones (through osteotomy) to correct jaw alignment issues and correct teeth alignment Filler injections: collagen, fat, and other tissue filler injections, such as hyaluronic acid Brachioplasty ("Arm lift"): reducing excess skin and fat between the underarm and the elbow Laser skin rejuvenation or laser resurfacing: the lessening of depth of facial pores and exfoliation of dead or damaged skin cells Liposuction ("suction lipectomy"): removal of fat deposits by traditional suction technique or ultrasonic energy to aid fat removal Zygoma reduction plasty: reducing the facial width by performing osteotomy and resecting part of the zygomatic bone and arch Jaw reduction: reduction of the mandible angle to smooth out an angular jaw and create a slim jaw Buccal fat extraction: extraction of the buccal fat pads Body contouring: the removal of excess skin and fat from numerous areas of the body, restoring the appearance of elasticity to the remaining skin. The surgery is prominent in those who have undergone significant weight loss resulting in excess sagging skin being present around areas of the body. The skin loses elasticity (a condition called elastosis) once it has been stretched past capacity and is unable to recoil back to its standard position against the body, and also with age. Sclerotherapy: removing visible 'spider veins' (telangiectasia), which appear on the surface of the skin. Dermal fillers: Dermal fillers are injected below the skin to give a fuller, more youthful appearance to a feature or section of the face. One type of dermal filler is hyaluronic acid. Hyaluronic acid is naturally found throughout the human body. It plays a vital role in moving nutrients to the cells of the skin from the blood. It is also commonly used in patients with arthritis as it acts like a cushion for bones whose articular cartilage casing has been depleted. Development within this field has occurred over time, with synthetic forms of hyaluronic acid being created and playing roles in other forms of cosmetic surgery such as facial augmentation. Micropigmentation: the creation of permanent makeup by applying natural pigments to areas such as the eyelids to create the effect of eye shadow, the lips to mimic lipstick, and the cheekbones to create a blush-like look. The pigment is inserted beneath the skin using a machine that drives a small needle at a very fast rate, carrying pigment into the skin and creating a lasting colouration of the desired area. In 2015, the most popular surgeries were botox, liposuction, blepharoplasties, breast implants, rhinoplasties, and rhytidectomies. According to the 2020 Plastic Surgery Statistics Report, which is published by the American Society of Plastic Surgeons, the most common surgical procedure performed in the U.S. was rhinoplasty (nose reshaping), accounting for 15.2% of all cosmetic surgical procedures that year, followed by blepharoplasty (eyelid surgery), which accounted for 14% of all procedures. The third most common procedure was rhytidectomy (facelift) (10% of all procedures), then liposuction (9.1% of all procedures). Complications, risks, and reversals All surgery has risks. Common complications of cosmetic surgery include hematoma, nerve injury, infection, scarring, implant failure and end organ damage. Breast implants can have many complications, including rupture. 
In a study of his 4761 augmentation mammaplasty patients, Eisenberg reported that overfilling saline breast implants 10–13% significantly reduced the rupture-deflation rate to 1.83% at 8-years post-implantation. In 2011 FDA stated that one in five patients who received implants for breast augmentation will need them removed within 10 years of implantation. Psychological disorders Although media and advertising do play a large role in influencing many people's lives, such as by making people believe plastic surgery to be an acceptable course to change one's identity to their liking, researchers believe that plastic surgery obsession is linked to psychological disorders such as body dysmorphic disorder. There exists a correlation between those with BDD and the predilection toward cosmetic plastic surgery in order to correct a perceived defect in their appearance. BDD is a disorder resulting in the individual becoming "preoccupied with what they regard as defects in their bodies or faces". Alternatively, where there is a slight physical anomaly, then the person's concern is markedly excessive. While 2% of people have body dysmorphic disorder in the United States, 15% of patients seeing a dermatologist and cosmetic surgeons have the disorder. Half of the patients with the disorder who have cosmetic surgery performed are not pleased with the aesthetic outcome. BDD can lead to suicide in some people with the condition. While many with BDD seek cosmetic surgery, the procedures do not treat BDD, and can ultimately worsen the problem. The psychological root of the problem is usually unidentified; therefore causing the treatment to be even more difficult. Some say that the fixation or obsession with correction of the area could be a sub-disorder such as anorexia or muscle dysmorphia. The increased use of body and facial reshaping applications such as Snapchat and Facetune have been identified as potential triggers of BDD. Recently, a phenomenon referred to as 'Snapchat dysmorphia' has appeared to describe people who request surgery to resemble the edited version of themselves as they appear through Snapchat filters. In response to the detrimental trend, Instagram banned all augmented reality (AR) filters that depict or promote cosmetic surgery. In some cases, people whose physicians refuse to perform any further surgeries, have turned to "do it yourself" plastic surgery, injecting themselves and facing extreme safety risks.
Biology and health sciences
Medical procedures
null
42052
https://en.wikipedia.org/wiki/Mescaline
Mescaline
Mescaline, also known as mescalin or mezcalin, and in chemical terms 3,4,5-trimethoxyphenethylamine, is a naturally occurring psychedelic protoalkaloid of the substituted phenethylamine class, known for its hallucinogenic effects comparable to those of LSD and psilocybin. It binds to and activates certain serotonin receptors in the brain, producing hallucinogenic effects. Biological sources It occurs naturally in several species of cacti. It is also reported to be found in small amounts in certain members of the bean family, Fabaceae, including Senegalia berlandieri (syn. Acacia berlandieri), although these reports have been challenged and have been unsupported in any additional analyses. As shown in the accompanying table, the concentration of mescaline in different specimens can vary largely within a single species. Moreover, the concentration of mescaline within a single specimen varies as well. History and use Peyote has been used for at least 5,700 years by Indigenous peoples of the Americas in Mexico. Europeans recorded use of peyote in Native American religious ceremonies upon early contact with the Huichol people in Mexico. Other mescaline-containing cacti such as the San Pedro have a long history of use in South America, from Peru to Ecuador. While religious and ceremonial peyote use was widespread in the Aztec Empire and northern Mexico at the time of the Spanish conquest, religious persecution confined it to areas near the Pacific coast and up to southwest Texas. However, by 1880, peyote use began to spread north of South-Central America with "a new kind of peyote ceremony" inaugurated by the Kiowa and Comanche people. These religious practices, incorporated legally in the United States in 1920 as the Native American Church, have since spread as far as Saskatchewan, Canada. In traditional peyote preparations, the top of the cactus is cut off, leaving the large tap root along with a ring of green photosynthesizing area to grow new heads. These heads are then dried to make disc-shaped buttons. Buttons are chewed to produce the effects or soaked in water to drink. However, the taste of the cactus is bitter, so modern users will often grind it into a powder and pour it into capsules to avoid having to taste it. The typical dosage is 200–400 milligrams of mescaline sulfate or 178–356 milligrams of mescaline hydrochloride. The average peyote button contains about 25 mg mescaline. Some analyses of traditional preparations of San Pedro cactus have found doses ranging from 34 mg to 159 mg of total alkaloids, a relatively low and barely psychoactive amount. It appears that patients who receive traditional treatments with San Pedro ingest sub-psychoactive doses and do not experience psychedelic effects. Botanical studies of peyote began in the 1840s and the drug was listed in the Mexican pharmacopeia. The first of mescal buttons was published by John Raleigh Briggs in 1887. Mescaline was first isolated and identified in 1896 or 1897 by the German chemist Arthur Heffter and his colleagues. He showed that mescaline was exclusively responsible for the psychoactive or hallucinogenic effects of peyote. However, other components of peyote, such as hordenine, pellotine, and anhalinine, are also active. Mescaline was first synthesized in 1919 by Ernst Späth. In 1955, English politician Christopher Mayhew took part in an experiment for BBC's Panorama, in which he ingested 400 mg of mescaline under the supervision of psychiatrist Humphry Osmond. 
Though the recording was deemed too controversial and ultimately omitted from the show, Mayhew praised the experience, calling it "the most interesting thing I ever did". Studies of the potential therapeutic effects of mescaline started in the 1950s. The mechanism of action of mescaline, activation of the serotonin 5-HT2A receptors, became known in the 1990s. Potential medical usage Mescaline has a wide array of suggested medical usage, including treatment of depression, anxiety, PTSD, and alcoholism. However, its status as a Schedule I controlled substance in the Convention on Psychotropic Substances limits availability of the drug to researchers. Because of this, very few studies concerning mescaline's activity and potential therapeutic effects in people have been conducted since the early 1970s. Behavioral and non-behavioral effects Mescaline induces a psychedelic state comparable to those produced by LSD and psilocybin, but with unique characteristics. Subjective effects may include altered thinking processes, an altered sense of time and self-awareness, and closed- and open-eye visual phenomena. Prominence of color is distinctive, appearing brilliant and intense. Recurring visual patterns observed during the mescaline experience include stripes, checkerboards, angular spikes, multicolor dots, and very simple fractals that turn very complex. The English writer Aldous Huxley described these self-transforming amorphous shapes as like animated stained glass illuminated from light coming through the eyelids in his autobiographical book The Doors of Perception (1954). Like LSD, mescaline induces distortions of form and kaleidoscopic experiences but they manifest more clearly with eyes closed and under low lighting conditions. Heinrich Klüver coined the term "cobweb figure" in the 1920s to describe one of the four form constant geometric visual hallucinations experienced in the early stage of a mescaline trip: "Colored threads running together in a revolving center, the whole similar to a cobweb". The other three are the chessboard design, tunnel, and spiral. Klüver wrote that "many 'atypical' visions are upon close inspection nothing but variations of these form-constants." As with LSD, synesthesia can occur especially with the help of music. An unusual but unique characteristic of mescaline use is the "geometrization" of three-dimensional objects. The object can appear flattened and distorted, similar to the presentation of a Cubist painting. Mescaline elicits a pattern of sympathetic arousal, with the peripheral nervous system being a major target for this substance. According to a research project in the Netherlands, ceremonial San Pedro use seems to be characterized by relatively strong spiritual experiences, and low incidence of challenging experiences. Chemistry Mescaline, also known as 3,4,5-trimethoxyphenethylamine (3,4,5-TMPEA), is a substituted phenethylamine derivative. It is closely structurally related to the catecholamine neurotransmitters dopamine, norepinephrine, and epinephrine. The drug is relatively hydrophilic with low fat solubility. Its predicted log P (XLogP3) is 0.7. Biosynthesis Mescaline is biosynthesized from tyrosine, which, in turn, is derived from phenylalanine by the enzyme phenylalanine hydroxylase. In Lophophora williamsii (Peyote), dopamine converts into mescaline in a biosynthetic pathway involving m-O-methylation and aromatic hydroxylation. Tyrosine and phenylalanine serve as metabolic precursors towards the synthesis of mescaline. 
Tyrosine can either undergo a decarboxylation via tyrosine decarboxylase to generate tyramine and subsequently undergo an oxidation at carbon 3 by a monophenol hydroxylase or first be hydroxylated by tyrosine hydroxylase to form L-DOPA and decarboxylated by DOPA decarboxylase. These create dopamine, which then undergoes methylation by a catechol-O-methyltransferase (COMT) by an S-adenosyl methionine (SAM)-dependent mechanism. The resulting intermediate is then oxidized again by a hydroxylase enzyme, likely monophenol hydroxylase again, at carbon 5, and methylated by COMT. The product, methylated at the two meta positions with respect to the alkyl substituent, undergoes a final methylation at the 4 carbon by a guaiacol-O-methyltransferase, which also operates by a SAM-dependent mechanism. This final methylation step results in the production of mescaline. Phenylalanine serves as a precursor by first being converted to L-tyrosine by L-amino acid hydroxylase. Once converted, it follows the same pathway as described above. Laboratory synthesis Mescaline was first synthesized in 1919 by Ernst Späth from 3,4,5-trimethoxybenzoyl chloride. Several approaches using different starting materials have been developed since, including the following: Hofmann rearrangement of 3,4,5-trimethoxyphenylpropionamide. Cyanohydrin reaction between potassium cyanide and 3,4,5-trimethoxybenzaldehyde followed by acetylation and reduction. Henry reaction of 3,4,5-trimethoxybenzaldehyde with nitromethane followed by nitro compound reduction of ω-nitrotrimethoxystyrene. Ozonolysis of elemicin followed by reductive amination. Ester reduction of eudesmic acid's methyl ester followed by halogenation, Kolbe nitrile synthesis, and nitrile reduction. Amide reduction of 3,4,5-trimethoxyphenylacetamide. Reduction of 3,4,5-trimethoxy(2-nitrovinyl)benzene with lithium aluminum hydride. Treatment of tricarbonyl-(η6-1,2,3-trimethoxybenzene) chromium complex with acetonitrile carbanion in THF and iodine, followed by reduction of the nitrile with lithium aluminum hydride. Pharmacology Pharmacodynamics In plants, mescaline may be the end-product of a pathway utilizing catecholamines as a method of stress response, similar to how animals may release such compounds and others such as cortisol when stressed. The in vivo function of catecholamines in plants has not been investigated, but they may function as antioxidants, as developmental signals, and as integral cell wall components that resist degradation from pathogens. The deactivation of catecholamines via methylation produces alkaloids such as mescaline. In humans, mescaline acts similarly to other psychedelic agents. It acts as an agonist, binding to and activating the serotonin 5-HT2A receptor. Its affinity at the serotonin 5-HT2A receptor is approximately 10,000 nM, and at the serotonin 5-HT2B receptor it is greater than 20,000 nM. How activating the 5-HT2A receptor leads to psychedelic effects is still unknown, but it likely involves excitation of neurons in the prefrontal cortex. In addition to the serotonin 5-HT2A and 5-HT2B receptors, mescaline is also known to bind to the serotonin 5-HT2C receptor and a number of other targets. Mescaline lacks affinity for the monoamine transporters, including the serotonin transporter (SERT), norepinephrine transporter (NET), and dopamine transporter (DAT) (Ki > 30,000 nM). However, it has been found to increase levels of the major serotonin metabolite 5-hydroxyindoleacetic acid (5-HIAA) at high doses in rodents. 
This finding suggests that mescaline might inhibit the reuptake and/or induce the release of serotonin at such doses. However, this possibility has not yet been further assessed or demonstrated. Besides serotonin, mescaline might also weakly induce the release of dopamine, but this is probably of modest significance, if it occurs. Accordingly, there is no evidence of the drug showing addiction or dependence. Other psychedelic phenethylamines, including the closely related 2C, DOx, and TMA drugs, are inactive as monoamine releasing agents and reuptake inhibitors. However, an exception is trimethoxyamphetamine (TMA), the amphetamine analogue of mescaline, which is a very low-potency serotonin releasing agent (≈16,000 nM). The possible monoamine-releasing effects of mescaline would likely be related to its structural similarity to substituted amphetamines and related compounds. Tolerance to mescaline builds with repeated usage, lasting for a few days. The drug causes cross-tolerance with other serotonergic psychedelics such as LSD and psilocybin. The LD50 of mescaline has been measured in various animals: 212–315 mg/kg i.p. (mice), 132–410 mg/kg i.p. (rats), 328 mg/kg i.p. (guinea pigs), 54 mg/kg in dogs, and 130 mg/kg i.v. in rhesus macaques. For humans, the LD50 of mescaline has been reported to be approximately 880 mg/kg. It has been said that it would be very difficult to consume enough mescaline to cause death in humans. Mescaline is a relatively low-potency psychedelic, with active doses in the hundreds of milligrams and micromolar affinities for the serotonin 5-HT2A receptor. For comparison, psilocybin is approximately 20-fold more potent (doses in the tens of milligrams) and lysergic acid diethylamide (LSD) is approximately 2,000-fold more potent (doses in the tens to hundreds of micrograms). There have been efforts to develop more potent analogues of mescaline. Difluoromescaline and trifluoromescaline are more potent than mescaline, as is its amphetamine homologue TMA. Escaline and proscaline are also both more potent than mescaline, showing the importance of the 4-position substituent with regard to receptor binding. Pharmacokinetics About half the initial dosage is excreted after 6 hours, but some studies suggest that it is not metabolized at all before excretion. Mescaline appears not to be subject to metabolism by CYP2D6 and between 20% and 50% of mescaline is excreted in the urine unchanged, with the rest being excreted as the deaminated-oxidised-carboxylic acid form of mescaline, a likely result of monoamine oxidase (MAO) degradation. However, the enzymes mediating the oxidative deamination of mescaline are controversial. MAO, diamine oxidase (DAO), and/or other enzymes may be involved or responsible. The elimination half-life of mescaline was originally reported to be 6 hours, but a new study published in 2023 reported a half-life of 3.6 hours. The higher estimate is believed to be due to small sample numbers and collective measurement of mescaline metabolites. (An illustrative comparison of the two estimates is given below.) Mescaline appears to have relatively poor blood–brain barrier permeability due to its low lipophilicity. However, it is still able to cross into the central nervous system and produce psychoactive effects at sufficiently high doses. Active metabolites of mescaline may contribute to its psychoactive effects. Legal status United States In the United States, mescaline was made illegal in 1970 by the Comprehensive Drug Abuse Prevention and Control Act, categorized as a Schedule I hallucinogen. 
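Illustrative calculation (not from the source): assuming simple first-order elimination, the fraction of a dose remaining in the body after time t is (1/2)^(t/t½). Under the older 6-hour half-life estimate, half of a dose would still be present at 6 hours, matching the observation that about half the initial dosage is excreted by that time; under the newer 3.6-hour estimate, only about (1/2)^(6/3.6) ≈ 0.31 of the dose, roughly a third, would remain at 6 hours, so the two figures describe noticeably different elimination rates. On the same illustrative basis, the potency comparison above implies that a 300 mg mescaline dose corresponds roughly to 300/20 = 15 mg of psilocybin and 300/2,000 = 0.15 mg (150 micrograms) of LSD, consistent with the dose ranges quoted.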
The drug is prohibited internationally by the 1971 Convention on Psychotropic Substances. Mescaline is legal only for certain religious groups (such as the Native American Church, under the American Indian Religious Freedom Act of 1978) and in scientific and medical research. In 1990, the Supreme Court ruled that the state of Oregon could ban the use of mescaline in Native American religious ceremonies. The Religious Freedom Restoration Act (RFRA) in 1993 allowed the use of peyote in religious ceremonies, but in 1997, the Supreme Court ruled that the RFRA was unconstitutional as applied to the states. Many states, including the state of Utah, have legalized peyote usage with "sincere religious intent", or within a religious organization, regardless of race. Synthetic mescaline, but not mescaline derived from cacti, was officially decriminalized in the state of Colorado by ballot measure Proposition 122 in November 2022. While mescaline-containing cacti of the genus Echinopsis are technically controlled substances under the Controlled Substances Act, they are commonly sold publicly as ornamental plants. United Kingdom In the United Kingdom, mescaline in purified powder form is a Class A drug. However, dried cactus can be bought and sold legally. Australia Mescaline is considered a schedule 9 substance in Australia under the Poisons Standard (February 2020). A schedule 9 substance is classified as "Substances with a high potential for causing harm at low exposure and which require special precautions during manufacture, handling or use. These poisons should be available only to specialised or authorised users who have the skills necessary to handle them safely. Special regulations restricting their availability, possession, storage or use may apply." Other countries In Canada, France, the Netherlands and Germany, mescaline in raw form and dried mescaline-containing cacti are considered illegal drugs. However, anyone may grow and use peyote, or Lophophora williamsii, as well as Echinopsis pachanoi and Echinopsis peruviana without restriction, as it is specifically exempt from legislation. In Canada, mescaline is classified as a schedule III drug under the Controlled Drugs and Substances Act, whereas peyote is exempt. In Russia, mescaline, its derivatives and mescaline-containing plants are banned as narcotic drugs (Schedule I). Notable users Salvador Dalí experimented with mescaline believing it would enable him to use his subconscious to further his art potential. Antonin Artaud wrote 1947's The Peyote Dance, where he describes his peyote experiences in Mexico a decade earlier. Jerry Garcia took peyote prior to forming The Grateful Dead but later switched to LSD and DMT since they were easier on the stomach. Allen Ginsberg took peyote. Part II of his poem "Howl" was inspired by a peyote vision that he had in San Francisco. Ken Kesey took peyote prior to writing One Flew Over the Cuckoo's Nest. Jean-Paul Sartre took mescaline shortly before the publication of his first book, L'Imaginaire; he had a bad trip during which he imagined that he was menaced by sea creatures. For many years following this, he persistently thought that he was being followed by lobsters, and became a patient of Jacques Lacan in hopes of being rid of them. Lobsters and crabs figure in his novel Nausea. Havelock Ellis was the author of one of the first written reports to the public about an experience with mescaline (1898). 
Stanisław Ignacy Witkiewicz, Polish writer, artist and philosopher, experimented with mescaline and described his experience in a 1932 book Nikotyna Alkohol Kokaina Peyotl Morfina Eter. Aldous Huxley described his experience with mescaline in the essay "The Doors of Perception" (1954). Jim Carroll in The Basketball Diaries described using peyote that a friend smuggled from Mexico. Quanah Parker, appointed by the federal government as principal chief of the entire Comanche Nation, advocated the syncretic Native American Church alternative, and fought for the legal use of peyote in the movement's religious practices. Hunter S. Thompson wrote an extremely detailed account of his first use of mescaline in "First Visit with Mescalito", and it appeared in his book Songs of the Doomed, as well as featuring heavily in his novel Fear and Loathing in Las Vegas. Psychedelic research pioneer Alexander Shulgin said he was first inspired to explore psychedelic compounds by a mescaline experience. In 1974, Shulgin synthesized 2C-B, a psychedelic phenylethylamine derivative, structurally similar to mescaline, and one of Shulgin's self-rated most important phenethylamine compounds together with Mescaline, 2C-E, 2C-T-7, and 2C-T-2. Bryan Wynter produced Mars Ascends after trying the substance for the first time. George Carlin mentioned mescaline use during his youth while being interviewed in 2008. Carlos Santana told about his mescaline use in a 1989 Rolling Stone interview. Disney animator Ward Kimball described participating in a study of mescaline and peyote conducted by UCLA in the 1960s. Michael Cera used real mescaline for the movie Crystal Fairy & the Magical Cactus, as expressed in an interview. Philip K. Dick was inspired to write Flow My Tears, the Policeman Said after taking mescaline. Arthur Kleps, a psychologist turned drug legalization advocate and writer whose Neo-American Church defended use of marijuana and hallucinogens such as LSD and peyote for spiritual enlightenment and exploration, bought, in 1960, by mail from Delta Chemical Company in New York 1 g of mescaline sulfate and took 500 mg. He experienced a psychedelic trip that caused profound changes in his life and outlook.
Biology and health sciences
Recreational drugs
Health
42078
https://en.wikipedia.org/wiki/Hydrogen%20cyanide
Hydrogen cyanide
Hydrogen cyanide (formerly known as prussic acid) is a chemical compound with the formula HCN and structural formula H-C≡N. It is a highly toxic and flammable liquid that boils slightly above room temperature, at . HCN is produced on an industrial scale and is a highly valued precursor to many chemical compounds ranging from polymers to pharmaceuticals. Large-scale applications are for the production of potassium cyanide and adiponitrile, used in mining and plastics, respectively. It is more toxic than solid cyanide compounds due to its volatile nature. A solution of hydrogen cyanide in water, represented as HCN, is called hydrocyanic acid. The salts of the cyanide anion are known as cyanides. Whether hydrogen cyanide is an organic compound or not is a topic of debate among chemists, and opinions vary from author to author. Traditionally, it is considered inorganic by a significant number of authors. Contrary to this view, it is considered organic by other authors, because hydrogen cyanide belongs to the class of organic compounds known as nitriles, which have the formula RCN, where R is typically an organyl group (e.g., alkyl or aryl) or hydrogen. In the case of hydrogen cyanide, the R group is hydrogen H, so the other names of hydrogen cyanide are methanenitrile and formonitrile. Structure and general properties Hydrogen cyanide is a linear molecule, with a triple bond between carbon and nitrogen. The tautomer of HCN is HNC, hydrogen isocyanide. HCN has a faint bitter almond-like odor that some people are unable to detect owing to a recessive genetic trait. The volatile compound has been used as an inhalation rodenticide and human poison, as well as for killing whales. Cyanide ions interfere with iron-containing respiratory enzymes. Chemical properties Hydrogen cyanide is weakly acidic with a pKa of 9.2. It partially ionizes in water to give the cyanide anion, CN−. HCN forms hydrogen bonds with its conjugate base, species such as . Hydrogen cyanide reacts with alkenes to give nitriles. The conversion, which is called hydrocyanation, employs nickel complexes as catalysts. Four molecules of HCN will tetramerize into diaminomaleonitrile. Metal cyanides are typically prepared by salt metathesis from alkali metal cyanide salts, but mercuric cyanide is formed from aqueous hydrogen cyanide. History of discovery and naming Hydrogen cyanide was first isolated in 1752 by French chemist Pierre Macquer, who converted Prussian blue to an iron oxide plus a volatile component and found that these could be used to reconstitute it. The new component was what is now known as hydrogen cyanide. It was subsequently prepared from Prussian blue by the Swedish chemist Carl Wilhelm Scheele in 1782, and was eventually given the German name Blausäure (lit. "Blue acid") because of its acidic nature in water and its derivation from Prussian blue. In English, it became known popularly as prussic acid. In 1787, the French chemist Claude Louis Berthollet showed that prussic acid did not contain oxygen, an important contribution to acid theory, which had hitherto postulated that acids must contain oxygen (hence the name of oxygen itself, which is derived from Greek elements that mean "acid-former" and are likewise calqued into German as Sauerstoff). In 1811, Joseph Louis Gay-Lussac prepared pure, liquefied hydrogen cyanide, and in 1815 he deduced prussic acid's chemical formula. 
Etymology The word cyanide for the radical in hydrogen cyanide was derived from its French equivalent, cyanure, which Gay-Lussac constructed from the Ancient Greek word κύανος for dark blue enamel or lapis lazuli, again owing to the chemical's derivation from Prussian blue. Incidentally, the Greek word is also the root of the English color name cyan. Production and synthesis The most important process is the Andrussow oxidation invented by Leonid Andrussow at IG Farben in which methane and ammonia react in the presence of oxygen at about over a platinum catalyst: 2 CH4 + 2 NH3 + 3 O2 → 2 HCN + 6 H2O. In 2006, between 500 million and 1 billion pounds (between 230,000 and 450,000 t) were produced in the US. Hydrogen cyanide is produced in large quantities by several processes and is a recovered waste product from the manufacture of acrylonitrile. Of lesser importance is the Degussa process (BMA process) in which no oxygen is added and the energy must be transferred indirectly through the reactor wall: CH4 + NH3 → HCN + 3 H2. This reaction is akin to steam reforming, the reaction of methane and water to give carbon monoxide and hydrogen. In the Shawinigan Process, hydrocarbons, e.g. propane, are reacted with ammonia. In the laboratory, small amounts of HCN are produced by the addition of acids to cyanide salts of alkali metals: H+ + NaCN → HCN + Na+. This reaction is sometimes the basis of accidental poisonings because the acid converts a nonvolatile cyanide salt into the gaseous HCN. Hydrogen cyanide can also be obtained from potassium ferricyanide and acid. Historical methods of production The large demand for cyanides for mining operations in the 1890s was met by George Thomas Beilby, who patented a method to produce hydrogen cyanide by passing ammonia over glowing coal in 1892. This method was used until Hamilton Castner in 1894 developed a synthesis starting from coal, ammonia, and sodium yielding sodium cyanide, which reacts with acid to form gaseous HCN. Applications HCN is the precursor to sodium cyanide and potassium cyanide, which are used mainly in gold and silver mining and for the electroplating of those metals. Via the intermediacy of cyanohydrins, a variety of useful organic compounds are prepared from HCN including the monomer methyl methacrylate, from acetone, the amino acid methionine, via the Strecker synthesis, and the chelating agents EDTA and NTA. Via the hydrocyanation process, HCN is added to butadiene to give adiponitrile, a precursor to Nylon-6,6. HCN is used globally as a fumigant against many species of pest insects that infest food production facilities. Both its efficacy and method of application lead to very small amounts of the fumigant being used compared to other toxic substances used for the same purpose. Using HCN as a fumigant also has less environmental impact, compared to some other fumigants such as sulfuryl fluoride and methyl bromide. Occurrence HCN is obtainable from fruits that have a pit, such as cherries, apricots, apples, and nuts such as bitter almonds, from which almond oil and extract are made. Many of these pits contain small amounts of cyanohydrins such as mandelonitrile and amygdalin, which slowly release hydrogen cyanide. One hundred grams of crushed apple seeds can yield about 70 mg of HCN. The roots of cassava plants contain cyanogenic glycosides such as linamarin, which decompose into HCN in yields of up to 370 mg per kilogram of fresh root. 
Some millipedes, such as Harpaphe haydeniana, Desmoxytes purpurosea, and Apheloria release hydrogen cyanide as a defense mechanism, as do certain insects, such as burnet moths and the larvae of Paropsisterna eucalyptus. Hydrogen cyanide is contained in the exhaust of vehicles, and in smoke from burning nitrogen-containing plastics. On Titan HCN has been measured in Titan's atmosphere by four instruments on the Cassini space probe, one instrument on Voyager, and one instrument on Earth. One of these measurements was in situ, where the Cassini spacecraft dipped between above Titan's surface to collect atmospheric gas for mass spectrometry analysis. HCN initially forms in Titan's atmosphere through the reaction of photochemically produced methane and nitrogen radicals, which proceeds through the H2CN intermediate, e.g., (CH3 + N → H2CN + H → HCN + H2). Ultraviolet radiation breaks HCN up into CN + H; however, CN is efficiently recycled back into HCN via the reaction CN + CH4 → HCN + CH3. On the young Earth It has been postulated that carbon from a cascade of asteroids (known as the Late Heavy Bombardment), resulting from the interaction of Jupiter and Saturn, blasted the surface of young Earth and reacted with nitrogen in Earth's atmosphere to form HCN. In mammals Some authors have shown that neurons can produce hydrogen cyanide upon activation of their opioid receptors by endogenous or exogenous opioids. They have also shown that neuronal production of HCN activates NMDA receptors and plays a role in signal transduction between neuronal cells (neurotransmission). Moreover, increased endogenous neuronal HCN production under opioids was seemingly needed for adequate opioid analgesia, as the analgesic action of opioids was attenuated by HCN scavengers. They considered endogenous HCN to be a neuromodulator. It has also been shown that, while stimulating muscarinic cholinergic receptors in cultured pheochromocytoma cells increases HCN production, in a living organism (in vivo) muscarinic cholinergic stimulation actually decreases HCN production. Leukocytes generate HCN during phagocytosis, and can kill bacteria, fungi, and other pathogens by generating several different toxic chemicals, one of which is hydrogen cyanide. The vasodilatation caused by sodium nitroprusside has been shown to be mediated not only by NO generation, but also by endogenous cyanide generation, which adds not only toxicity, but also some additional antihypertensive efficacy compared to nitroglycerine and other non-cyanogenic nitrates which do not cause blood cyanide levels to rise. HCN is a constituent of tobacco smoke. HCN and the origin of life Hydrogen cyanide has been discussed as a precursor to amino acids and nucleic acids, and is proposed to have played a part in the origin of life. Although the relationship of these chemical reactions to the origin of life theory remains speculative, studies in this area have led to discoveries of new pathways to organic compounds derived from the condensation of HCN (e.g. adenine). For this reason, molecules like hydrogen cyanide are among the primary factors that scientists searching for life on planets beyond Earth examine, after confirming suitable temperatures and the presence of water. In space HCN has been detected in the interstellar medium and in the atmospheres of carbon stars. Since then, extensive studies have probed formation and destruction pathways of HCN in various environments and examined its use as a tracer for a variety of astronomical species and processes. 
HCN can be observed from ground-based telescopes through a number of atmospheric windows. The J=1→0, J=3→2, J=4→3, and J=10→9 pure rotational transitions have all been observed. HCN is formed in interstellar clouds through one of two major pathways: via a neutral-neutral reaction (CH2 + N → HCN + H) and via dissociative recombination (HCNH+ + e− → HCN + H). The dissociative recombination pathway is dominant by 30%; however, the HCNH+ must be in its linear form. Dissociative recombination with its structural isomer, H2NC+, exclusively produces hydrogen isocyanide (HNC). HCN is destroyed in interstellar clouds through a number of mechanisms depending on the location in the cloud. In photon-dominated regions (PDRs), photodissociation dominates, producing CN (HCN + ν → CN + H). At further depths, photodissociation induced by cosmic rays dominates, producing CN (HCN + cr → CN + H). In the dark core, two competing mechanisms destroy it, forming HCN+ and HCNH+ (HCN + H+ → HCN+ + H; HCN + HCO+ → HCNH+ + CO). The reaction with HCO+ dominates by a factor of ~3.5. HCN has been used to analyze a variety of species and processes in the interstellar medium. It has been suggested as a tracer for dense molecular gas and as a tracer of stellar inflow in high-mass star-forming regions. Further, the HNC/HCN ratio has been shown to be an excellent method for distinguishing between PDRs and X-ray-dominated regions (XDRs). On 11 August 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, H2CO, and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON). In February 2016, it was announced that traces of hydrogen cyanide were found in the atmosphere of the hot Super-Earth 55 Cancri e with NASA's Hubble Space Telescope. On 14 December 2023, astronomers reported the first discovery of hydrogen cyanide, a possible chemical essential for life as we know it, in the plumes of Enceladus, a moon of Saturn, as well as other organic molecules, some of which are yet to be better identified and understood. According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." As a poison and chemical weapon In World War I, hydrogen cyanide was used by the French from 1916 as a chemical weapon against the Central Powers, and by the United States and Italy in 1918. It was not found to be effective enough due to weather conditions. The gas is lighter than air and rapidly disperses up into the atmosphere. Rapid dilution made its use in the field impractical. In contrast, denser agents such as phosgene or chlorine tended to remain at ground level and sank into the trenches of the Western Front's battlefields. Compared to such agents, hydrogen cyanide had to be present in higher concentrations in order to be fatal. A hydrogen cyanide concentration of 100–200 ppm in breathing air will kill a human within 10 to 60 minutes. A hydrogen cyanide concentration of 2000 ppm (about 2380 mg/m3) will kill a human in about one minute. The toxic effect is caused by the action of the cyanide ion, which halts cellular respiration. It acts as a non-competitive inhibitor for an enzyme in mitochondria called cytochrome c oxidase. As such, hydrogen cyanide is commonly listed among chemical weapons as a blood agent. 
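As an illustrative check of the figures above (not stated in the source), a gas concentration in ppm by volume can be converted to mass per volume using the molar mass of HCN (about 27.0 g/mol) and the molar volume of an ideal gas (about 22.7 L/mol at 0 °C): 2000 ppm × 27.0 g/mol ÷ 22.7 L/mol ≈ 2380 mg/m3, in agreement with the value quoted for the concentration that is lethal within about one minute.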
The Chemical Weapons Convention lists it under Schedule 3 as a potential weapon which has large-scale industrial uses. Signatory countries must declare manufacturing plants that produce more than 30 metric tons per year, and allow inspection by the Organisation for the Prohibition of Chemical Weapons. Perhaps its most infamous use is Zyklon B (German: Cyclone B, with the B standing for Blausäure – prussic acid; also, to distinguish it from an earlier product later known as Zyklon A), used in the Nazi German extermination camps of Majdanek and Auschwitz-Birkenau during World War II to kill Jews and other persecuted minorities en masse as part of their Final Solution genocide program. Hydrogen cyanide was also used in the camps for delousing clothing in attempts to eradicate diseases carried by lice and other parasites. One of the original Czech producers continued making Zyklon B under the trademark "Uragan D2" until around 2015. During World War II, the US considered using it, along with cyanogen chloride, as part of Operation Downfall, the planned invasion of Japan, but President Harry Truman decided against it, instead using the atomic bombs developed by the secret Manhattan Project. Hydrogen cyanide was also the agent employed in judicial executions in some U.S. states, where it was produced during the execution by the action of sulfuric acid on sodium cyanide or potassium cyanide. Under the name prussic acid, HCN has been used as a killing agent in whaling harpoons, although it proved quite dangerous to the crew deploying it, and it was quickly abandoned. From the middle of the 18th century it was used in a number of poisoning murders and suicides. Hydrogen cyanide gas in air is explosive at concentrations above 5.6%.
Physical sciences
Hydrogen compounds
Chemistry
42079
https://en.wikipedia.org/wiki/Potassium%20ferrocyanide
Potassium ferrocyanide
Potassium hexacyanidoferrate(II) is the inorganic compound with formula K4[Fe(CN)6]·3H2O. It is the potassium salt of the coordination complex [Fe(CN)6]4−. This salt forms lemon-yellow monoclinic crystals. Synthesis In 1752, the French chemist Pierre Joseph Macquer (1718–1784) first reported the preparation of potassium hexacyanidoferrate(II), which he achieved by reacting Prussian blue (iron(III) ferrocyanide) with potassium hydroxide. Modern production Potassium hexacyanidoferrate(II) is produced industrially from hydrogen cyanide, iron(II) chloride, and calcium hydroxide, the combination of which affords Ca2[Fe(CN)6]·11H2O. This solution is then treated with potassium salts to precipitate the mixed calcium-potassium salt CaK2[Fe(CN)6], which in turn is treated with potassium carbonate to give the tetrapotassium salt. Historical production Historically, the compound was manufactured from nitrogenous organic material, iron filings, and potassium carbonate. Common nitrogen and carbon sources were torrefied horn, leather scrap, offal, or dried blood. It was also obtained commercially from gasworks spent oxide (purification of city gas from hydrogen cyanide). Chemical reactions Treatment of potassium hexacyanidoferrate(II) with nitric acid gives H2[Fe(NO)(CN)5]. After neutralization of this intermediate with sodium carbonate, red crystals of sodium nitroprusside can be selectively crystallized. Upon treatment with chlorine gas, potassium hexacyanidoferrate(II) converts to potassium hexacyanidoferrate(III): 2 K4[Fe(CN)6] + Cl2 → 2 K3[Fe(CN)6] + 2 KCl This reaction can be used to remove potassium hexacyanidoferrate(II) from a solution. A famous reaction involves treatment with ferric salts, most commonly iron(III) chloride, to give Prussian blue. The reaction with iron(III) chloride produces potassium chloride as a side-product: 3 K4[Fe(CN)6] + 4 FeCl3 → Fe4[Fe(CN)6]3 + 12 KCl With the composition Fe4[Fe(CN)6]3, this insoluble but deeply coloured material is the blue of blueprinting, and appears on many famous paintings such as The Great Wave off Kanagawa and The Starry Night. Applications Potassium hexacyanidoferrate(II) finds many niche applications in industry. It and the related sodium salt are widely used as anticaking agents for both road salt and table salt. The potassium and sodium hexacyanidoferrates(II) are also used in the purification of tin and the separation of copper from molybdenum ores. Potassium hexacyanidoferrate(II) is used in the production of wine and citric acid. In the EU, hexacyanidoferrates(II) (E 535–538) were, as of 2017, solely authorised in two food categories as salt additives. It can also be used in animal feed. In the laboratory, potassium hexacyanidoferrate(II) is used to determine the concentration of potassium permanganate, a compound often used in titrations based on redox reactions. Potassium hexacyanidoferrate(II) is used in a mixture with potassium ferricyanide and phosphate buffered solution to provide a buffer for beta-galactosidase, which is used to cleave X-Gal, giving a bright blue visualization where an antibody (or other molecule), conjugated to beta-gal, has bonded to its target. On reacting with iron(III) it gives a Prussian blue colour. Thus it is used as an identifying reagent for iron in labs. Potassium hexacyanidoferrate(II) can be used as a fertilizer for plants. Prior to 1900, before the invention of the Castner process, potassium hexacyanidoferrate(II) was the most important source of alkali metal cyanides. 
In this historical process, potassium cyanide was produced by decomposing potassium hexacyanidoferrate(II): K4[Fe(CN)6] → 4 KCN + FeC2 + N2 Structure Like other metal cyanides, solid potassium hexacyanidoferrate(II), both as the hydrate and anhydrous salts, has a complicated polymeric structure. The polymer consists of octahedral [Fe(CN)6]4− centers crosslinked with K+ ions that are bound to the CN ligands. The K+···NC linkages break when the solid is dissolved in water. Toxicity The toxicity in rats is low, with lethal dose (LD50) at 6400 mg/kg. The kidneys are the target organ of ferrocyanide toxicity.
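As an illustrative check (not stated in the source), the decomposition equation above is balanced: K4[Fe(CN)6] supplies 4 K, 1 Fe, 6 C, and 6 N, while the products 4 KCN + FeC2 + N2 account for 4 K, 1 Fe, 4 + 2 = 6 C, and 4 + 2 = 6 N.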
Physical sciences
Cyanide salts
Chemistry
42080
https://en.wikipedia.org/wiki/Atlantic%20cod
Atlantic cod
The Atlantic cod (plural: cod; Gadus morhua) is a fish of the family Gadidae, widely consumed by humans. It is also commercially known as cod or codling. In the western Atlantic Ocean, cod has a distribution north of Cape Hatteras, North Carolina, and around both coasts of Greenland and the Labrador Sea; in the eastern Atlantic, it is found from the Bay of Biscay north to the Arctic Ocean, including the Baltic Sea, the North Sea, Sea of the Hebrides, areas around Iceland and the Barents Sea. Atlantic cod can live for up to 25 years and typically grow up to , but individuals in excess of and have been caught. They attain sexual maturity between ages two and eight, with this varying between populations and over time. Colouring is brown or green, with spots on the dorsal side, shading to silver ventrally. A stripe along its lateral line (used to detect vibrations) is clearly visible. Its habitat ranges from the coastal shoreline down to along the continental shelf. Atlantic cod is one of the most heavily fished species. Atlantic cod was fished for a thousand years by north European fishers who followed it across the North Atlantic Ocean to North America. It supported the US and Canadian fishing economies until 1992, when the Canadian Government implemented a ban on fishing cod. Several cod stocks collapsed in the 1990s (decline of more than 95% of maximum historical biomass) and have failed to fully recover even with the cessation of fishing. This absence of the apex predator has led to a trophic cascade in many areas. Many other cod stocks remain at risk. The Atlantic cod is labelled vulnerable on the IUCN Red List of Threatened Species, per a 1996 assessment that the IUCN notes needs updating. A 2013 assessment covering only Europe shows the Atlantic cod has rebounded in Europe, and it has been relabelled least concern. Dry cod may be prepared as unsalted stockfish, and as cured salt cod or clipfish. Taxonomy The Atlantic cod is one of three cod species in the genus Gadus along with Pacific cod and Greenland cod. A variety of fish species are colloquially known as cod, but they are not all classified within the genus Gadus, though some are in the Atlantic cod family, Gadidae. Behaviour Shoaling Atlantic cod are a shoaling species and move in large, size-structured aggregations. Larger fish act as scouts and lead the shoal's direction, particularly during post-spawning migrations inshore for feeding. Cod actively feed during migration and changes in shoal structure occur when food is encountered. Shoals are generally thought to be relatively leaderless, with all fish having equal status and an equal distribution of resources and benefits. However, some studies suggest that leading fish gain certain feeding benefits. One study of a migrating Atlantic cod shoal showed significant variability in feeding habits based on size and position in the shoal. Larger scouts consumed a more variable, higher quantity of food, while trailing fish had less variable diets and consumed less food. Fish distribution throughout the shoal seems to be dictated by fish size, and ultimately, the smaller lagging fish likely benefit from shoaling because they are more successful in feeding in the shoal than they would be if migrating individually, due to social facilitation. Predation Atlantic cod are apex predators in the Baltic and adults are generally free from the concerns of predation. Juvenile cod, however, may serve as prey for adult cod, which sometimes practice cannibalism. 
Juvenile cod make substrate decisions based on risk of predation. Substrates refer to different feeding and swimming environments. Without apparent risk of predation, juvenile cod demonstrated a preference for finer-grained substrates such as sand and gravel-pebble. However, in the presence of a predator, they preferred to seek safety in the space available between stones of a cobble substrate. Selection of cobble significantly reduces the risk of predation. Without access to cobble, the juvenile cod simply tries to escape a predator by fleeing. Additionally, juvenile Atlantic cod vary their behaviour according to the foraging behaviour of predators. In the vicinity of a passive predator, cod behaviour changes very little. The juveniles prefer finer-grained substrates and otherwise avoid the safer kelp, steering clear of the predator. In contrast, in the presence of an actively foraging predator, juveniles are highly avoidant and hide in cobble or in kelp if cobble is unavailable. Heavy fishing of cod in the 1990s and the collapse of American and Canadian cod stocks resulted in trophic cascades. As cod are apex predators, overfishing them removed a significant predatory pressure on other Atlantic fish and crustacean species. Population-limiting effects on several species including American lobsters, crabs, and shrimp from cod predation have decreased significantly, and the abundance of these species and their increasing range serve as evidence of the Atlantic cod's role as a major predator rather than prey. Swimming Atlantic cod have been recorded to swim at speeds of a minimum of and a maximum of with a mean swimming speed of . In one hour, cod have been recorded to cover a mean range of . Swimming speed was higher during the day than at night. This is reflected in the fact that cod more actively search for food during the day. Cod likely modify their activity pattern according to the length of daylight, thus activity varies with time of year. Response to changing temperatures Swimming and physiological behaviours change in response to fluctuations in water temperature. Respirometry experiments show that heart rates of Atlantic cod change drastically with changes in temperature of only a few degrees. A rise in water temperature causes marked increases in cod swimming activity. Cod typically avoid new temperature conditions, and the temperatures can dictate where they are distributed in water. They prefer to be deeper, in colder water layers during the day, and in shallower, warmer water layers at night. These fine-tuned behavioural changes to water temperature are driven by an effort to maintain homeostasis to preserve energy. This is demonstrated by the fact that a decrease of only caused a highly costly increase in metabolic rate of 15–30%. Feeding and diet The diet of the Atlantic cod consists of fish such as herring, capelin (in the Eastern Atlantic Ocean), and sand eels, as well as mollusks, tunicates, comb jellies, crustaceans, echinoderms and sea worms. Stomach sampling studies have discovered that small Atlantic cod feed primarily on crustaceans, while large Atlantic cod feed primarily on fish. In certain regions, the main food source is decapods with fish as a complementary food item in the diet. Wild Atlantic cod throughout the North Sea depend, to a large extent, on commercial fish species also used in fisheries, such as Atlantic mackerel, haddock, whiting, Atlantic herring, European plaice, and common sole, making fishery manipulation of cod significantly easier. 
Ultimately, food selection by cod is affected by the food item size relative to their own size. However, allowing for size, cod do exhibit food preferences and are not simply driven by availability. Atlantic cod practice some cannibalism. In the southern North Sea, 1–2% (by weight) of stomach contents for cod larger than consisted of juvenile cod. In the northern North Sea, cannibalism was higher, at 10%. Other reports have estimated that as much as 56% of the diet consists of juvenile cod. When hatched, cod larvae are altricial, entirely dependent on a yolk sac for sustenance until mouth opening at ~24 degree days. The stomach generally develops at around 240 degree days. Before this point, the intestine is the main site of food digestion using pancreatic enzymes such as trypsin. Reproduction Atlantic cod attain sexual maturity between ages two and eight, with this varying between populations and over time within a population. Their gonads take several months to develop and most populations will spawn from January to May. For many populations, the spawning grounds are located in a different area than the feeding grounds, so the fish must migrate in order to spawn. On the spawning grounds, males and females will form large schools. Based on behavioral observations of cod, the cod mating system has been likened to a lekking system, which is characterized by males aggregating and establishing dominance hierarchies, at which point females may visit and choose a spawning partner based on status and sexual characteristics. Evidence suggests male sound production and other sexually selected characteristics allow female cod to actively choose a spawning partner. Males also exhibit aggressive interactions for access to females. Atlantic cod are batch spawners, in which females will spawn approximately 5–20 batches of eggs over a period of time with 2–4 days between the release of each batch. Each female will spawn between 200,000 and 15 million eggs, with larger females spawning more eggs. Females release gametes in a ventral mount, and males then fertilize the released eggs. The eggs and newly hatched larvae float freely in the water and will drift with the current, with some populations relying upon the current to transport the larvae to nursery areas. Parasites Atlantic cod act as intermediate, paratenic, or definitive hosts to a large number of parasite species: 107 taxa listed by Hemmingsen and MacKenzie (2001) and seven new records by Perdiguero-Alonso et al. (2008). The predominant groups of cod parasites in the northeast Atlantic were trematodes (19 species) and nematodes (13 species), including larval anisakids, which comprised 58.2% of the total number of individuals. Parasites of Atlantic cod include copepods, digeneans, monogeneans, acanthocephalans, cestodes, nematodes, myxozoans, and protozoans. Fisheries Atlantic cod has been targeted by humans for food for thousands of years, and with the advent of modern fishing technology in the 1950s there was a rapid rise in landings. Cod is caught using a variety of fishing gears including bottom trawls, demersal longlines, Danish seine, jigging and hand lines. The quantity of cod landed from fisheries has been recorded by many countries from around the 1950s and attempts have been made to reconstruct historical catches going back hundreds of years. ICES and NAFO collect landings data, alongside other data, which are used to assess the status of the population against management objectives. 
Landings in the eastern Atlantic frequently exceed 1 million tonnes annually across 16 populations/management units, with the Northeast Atlantic cod population and Iceland accounting for the majority of the landings. Since 1992, when the cod moratorium took effect in Canada, landings in the western Atlantic have been considerably lower than in the eastern Atlantic, generally being less than 50,000 tonnes annually. Northwest Atlantic cod The Northwest Atlantic cod has been regarded as heavily overfished throughout its range, resulting in a crash in the fishery in the United States and Canada during the early 1990s. Newfoundland's northern cod fishery can be traced back to the 16th century. On average, about of cod were landed annually until the 1960s, when advances in technology enabled factory trawlers to take larger catches. By 1968, landings for the fish peaked at before a gradual decline set in. With the reopening of the limited cod fisheries in 2006, nearly of cod were hauled in. In 2007, offshore cod stocks were estimated at 1% of what they were in 1977. Technologies that contributed to the collapse of Atlantic cod include engine-powered vessels and frozen food compartments aboard ships. Engine-powered vessels had larger nets, greater range, and better navigation. The capacity to catch fish became limitless. In addition, sonar technology gave an edge to detecting and catching fish. Sonar was originally developed during World War II to locate enemy submarines, but was later applied to locating schools of fish. These new technologies, as well as bottom trawlers that destroyed entire ecosystems, contributed to the collapse of Atlantic cod. They were vastly different from old techniques used, such as hand lines and long lines. The fishery has only recently begun to recover, and may never fully recover because of a possibly stable change in the food chain. Atlantic cod was a top-tier predator, along with haddock, flounder and hake, feeding upon smaller prey, such as herring, capelin, shrimp, and snow crab. With the large predatory fish removed, their prey have had population explosions and have become the top predators, affecting the survival rates of cod eggs and fry. In the winter of 2011–2012, the cod fishery succeeded in convincing NOAA to postpone for one year the planned 82% reduction in catch limits. Instead, the limit was reduced by 22%. The fishery brought in $15.8 million in 2010, coming second behind Georges Bank haddock among the region's 20 regulated bottom-dwelling groundfish. Data released in 2011 indicated that even closing the fishery would not allow populations to rebound by 2014 to levels required under federal law. Restrictions on cod effectively limit fishing on other groundfish species with which the cod swim, such as flounder and haddock. Northeast Atlantic cod The Northeast Atlantic has the world's largest population of cod. By far, the largest part of this population is the Northeast Arctic cod, as it is labelled by the ICES, or the Arcto-Norwegian cod stock, also referred to as skrei, a Norwegian name meaning something like "the wanderer", distinguishing it from coastal cod. The Northeast Arctic cod is found in the Barents Sea area. This stock spawns in March and April along the Norwegian coast, about 40% around the Lofoten archipelago. Newly hatched larvae drift northwards with the coastal current while feeding on larval copepods. 
By summer, the young cod reach the Barents Sea, where they stay for the rest of their lives, until their spawning migration. As the cod grow, they feed on krill and other small crustaceans and fish. Adult cod primarily feed on fish such as capelin and herring. The northeast Arctic cod also show cannibalistic behaviour. Estimated stock size was in 2008. The North Sea cod stock is primarily fished by European Union member states, the United Kingdom and Norway. In 1999, the catch was divided among Denmark (31%), Scotland (25%), the rest of the United Kingdom (12%), the Netherlands (10%), Belgium, Germany and Norway (17%). In the 1970s, the annual catch rose to between . Due to concerns about overfishing, catch quotas were repeatedly reduced in the 1980s and 1990s. In 2003, ICES stated that a high risk of stock collapse existed if then-current exploitation levels continued, and recommended a moratorium on catching Atlantic cod in the North Sea during 2004. However, agriculture and fisheries ministers from the Council of the European Union endorsed the EU/Norway Agreement and set the total allowable catch at . Seafood sustainability guides, such as the Monterey Bay Aquarium's Seafood Watch, often recommend environmentally conscious customers not purchase Atlantic cod. The stock of Northeast Arctic cod was more than four million tons following World War II, but declined to a historic minimum of in 1983. The catch reached a historic maximum of in 1956, and bottomed out at in 1990. Since 2000, the spawning stock has increased quite quickly, helped by low fishing pressure. The total catch in 2012 was , the major fishers being Norway and Russia. Baltic cod Decades of overfishing in combination with environmental problems, namely little water exchange, low salinity and oxygen-depletion at the sea bottom, caused major threats to the Baltic cod stocks. There are at least two populations of cod in the Baltic Sea: one large population that spawns east of Bornholm and one population spawning west of Bornholm. Eastern Baltic cod is genetically distinct and adapted to the brackish environment. Adaptations include differences in hemoglobin type, osmoregulatory capacity, egg buoyancy, sperm swimming characteristics and spawning season. The adaptive responses to the environmental conditions in the Baltic Sea may contribute to an effective reproductive barrier, and thus, eastern Baltic cod can be viewed as an example of ongoing speciation. Due to drastically low cod population sizes, commercial fishing of eastern Baltic cod has been prohibited since 2019. However, unfavourable environmental conditions in the eastern Baltic Sea, i.e., low salinity and increasing oxygen-depletion at the sea bottom, have led to a situation in which only the Bornholm Basin (southern Baltic Sea) currently offers sufficient conditions for successful reproduction of eastern Baltic cod. The western Baltic cod consists of one or several small subpopulations that are genetically more similar to the North Sea cod. In the Arkona basin (located off Cape Arkona, Rügen), spawning and migrating cod from both the eastern and western stocks intermingle in proportions that vary seasonally. The immigration of eastern cod into the western Baltic management unit may mask a poor state of the populations in the western management unit.
Biology and health sciences
Acanthomorpha
Animals
42114
https://en.wikipedia.org/wiki/Salmonella
Salmonella
Salmonella is a genus of rod-shaped (bacillus), gram-negative bacteria of the family Enterobacteriaceae. The two known species of Salmonella are Salmonella enterica and Salmonella bongori. S. enterica is the type species and is further divided into six subspecies that include over 2,650 serotypes. Salmonella was named after Daniel Elmer Salmon (1850–1914), an American veterinary surgeon. Salmonella species are non-spore-forming, predominantly motile enterobacteria with cell diameters between about 0.7 and 1.5 μm, lengths from 2 to 5 μm, and peritrichous flagella (all around the cell body, allowing them to move). They are chemotrophs, obtaining their energy from oxidation and reduction reactions, using organic sources. They are also facultative anaerobes, capable of generating adenosine triphosphate with oxygen ("aerobically") when it is available, or using other electron acceptors or fermentation ("anaerobically") when oxygen is not available. Salmonella species are intracellular pathogens, of which certain serotypes cause illness such as salmonellosis. Most infections are due to the ingestion of food contaminated by feces. Typhoidal Salmonella serotypes can only be transferred between humans and can cause foodborne illness as well as typhoid and paratyphoid fever. Typhoid fever is caused by typhoidal Salmonella invading the bloodstream, as well as spreading throughout the body, invading organs, and secreting endotoxins (the septic form). This can lead to life-threatening hypovolemic shock and septic shock, and requires intensive care, including antibiotics. Nontyphoidal Salmonella serotypes are zoonotic and can be transferred from animals and between humans. They usually invade only the gastrointestinal tract and cause salmonellosis, the symptoms of which can be resolved without antibiotics. However, in sub-Saharan Africa, nontyphoidal Salmonella can be invasive and cause paratyphoid fever, which requires immediate antibiotic treatment. Taxonomy The genus Salmonella is part of the family Enterobacteriaceae. Its taxonomy has been revised and can be confusing. The genus comprises two species, S. bongori and S. enterica, the latter of which is divided into six subspecies: S. e. enterica, S. e. salamae, S. e. arizonae, S. e. diarizonae, S. e. houtenae, and S. e. indica. The taxonomic group contains more than 2500 serotypes (also serovars) defined on the basis of the somatic O (lipopolysaccharide) and flagellar H antigens (the Kauffman–White classification). The full name of a serotype is given as, for example, Salmonella enterica subsp. enterica serotype Typhimurium, but can be abbreviated to Salmonella Typhimurium. Further differentiation of strains to assist clinical and epidemiological investigation may be achieved by antibiotic sensitivity testing and by other molecular biology techniques such as pulsed-field gel electrophoresis, multilocus sequence typing, and, increasingly, whole genome sequencing. Historically, salmonellae have been clinically categorized as invasive (typhoidal) or non-invasive (nontyphoidal salmonellae) based on host preference and disease manifestations in humans. History Salmonella was first visualized in 1880 by Karl Eberth in the Peyer's patches and spleens of typhoid patients. Four years later, Georg Theodor Gaffky was able to grow the pathogen in pure culture. A year after that, medical research scientist Theobald Smith discovered what would later be known as Salmonella enterica (var. Choleraesuis). 
At the time, Smith was working as a research laboratory assistant in the Veterinary Division of the United States Department of Agriculture. The division was under the administration of Daniel Elmer Salmon, a veterinary pathologist. Initially, Salmonella Choleraesuis was thought to be the causative agent of hog cholera, so Salmon and Smith named it "Hog-cholera bacillus". The name Salmonella was not used until 1900, when Joseph Leon Lignières proposed that the pathogen discovered by Salmon's group be called Salmonella in his honor. In the late 1930s, Australian bacteriologist Nancy Atkinson established a salmonella typing laboratory – one of only three in the world at the time – at the Government of South Australia's Laboratory of Pathology and Bacteriology in Adelaide (later the Institute of Medical and Veterinary Science). It was here that Atkinson described multiple new strains of salmonella, including Salmonella Adelaide, which was isolated in 1943. Atkinson published her work on salmonellas in 1957. Serotyping Serotyping is done by mixing cells with antibodies for a particular antigen. It can give some idea about risk. A 2014 study showed that S. Reading is very common among young turkey samples, but it is not a significant contributor to human salmonellosis. Serotyping can assist in identifying the source of contamination by matching serotypes in people with serotypes in the suspected source of infection. Appropriate prophylactic treatment can be identified from the known antibiotic resistance of the serotype. Newer methods of "serotyping" include xMAP and real-time PCR, two methods based on DNA sequences instead of antibody reactions. These methods can be potentially faster, thanks to advances in sequencing technology. These "molecular serotyping" systems actually perform genotyping of the genes that determine surface antigens. Detection, culture, and growth conditions Most subspecies of Salmonella produce hydrogen sulfide, which can readily be detected by growing them on media containing ferrous sulfate, such as is used in the triple sugar iron test. Most isolates exist in two phases, a motile phase and a non-motile phase. Cultures that are nonmotile upon primary culture may be switched to the motile phase using a Craigie tube or ditch plate. RVS broth can be used to enrich for Salmonella species for detection in a clinical sample. Salmonella can also be detected and subtyped using multiplex or real-time polymerase chain reaction (qPCR) from extracted Salmonella DNA. Mathematical models of Salmonella growth kinetics have been developed for chicken, pork, tomatoes, and melons. Salmonella reproduce asexually with a cell division interval of 40 minutes. Salmonella species lead predominantly host-associated lifestyles, but the bacteria were found to be able to persist in a bathroom setting for weeks following contamination, and are frequently isolated from water sources, which act as bacterial reservoirs and may help to facilitate transmission between hosts. Salmonella is notorious for its ability to survive desiccation and can persist for years in dry environments and foods. The bacteria are not destroyed by freezing, but UV light and heat accelerate their destruction. They perish after being heated to for 90 min, or to for 12 min, although if inoculated in high fat, high liquid substances like peanut butter, they gain heat resistance and can survive up to for 30 min. To protect against Salmonella infection, heating food to an internal temperature of is recommended. 
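As a rough illustration of the 40-minute division interval quoted above, unchecked exponential doubling can be computed directly. The sketch below is a simplified illustration only, not one of the published growth models: it ignores lag and stationary phases, nutrient limits, and temperature, all of which those models account for.

# Simplified doubling sketch for a 40-minute division interval.
# Illustrative only: ignores lag phase, nutrient limitation, and temperature.
DOUBLING_TIME_MIN = 40

def cells_after(hours, initial_cells=1):
    """Cell count after unrestricted doubling for the given number of hours."""
    doublings = (hours * 60) / DOUBLING_TIME_MIN
    return initial_cells * 2 ** doublings

for h in (2, 4, 8):
    print(f"{h} h: about {cells_after(h):,.0f} cells from a single cell")
# 2 h: about 8 cells; 4 h: about 64 cells; 8 h: about 4,096 cells

Under these idealized assumptions a single cell becomes roughly four thousand cells in eight hours, which is why the temperature control of food described above is emphasized.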
Salmonella species can be found in the digestive tracts of humans and animals, especially reptiles. Salmonella on the skin of reptiles or amphibians can be passed to people who handle the animals. Food and water can also be contaminated with the bacteria if they come in contact with the feces of infected people or animals. Nomenclature Initially, each Salmonella "species" was named according to clinical consideration, for example Salmonella typhi-murium (mouse-typhoid), S. cholerae-suis (pig-cholera). After host specificity was recognized not to exist for many species, new strains received species names according to the location at which the new strain was isolated. In 1987, Le Minor and Popoff used molecular findings to argue that Salmonella consisted of only one species, S. enterica, turning former "species" names into serotypes. In 1989, Reeves et al. proposed that the serotype V should remain its own species, resurrecting the name S. bongori. The current (by 2005) nomenclature has thus taken shape, with six recognised subspecies under S. enterica: enterica (serotype I), salamae (serotype II), arizonae (IIIa), diarizonae (IIIb), houtenae (IV), and indica (VI). As specialists in infectious disease are not familiar with the new nomenclature, the traditional nomenclature remains common. The serotype or serovar is a classification of Salmonella based on antigens that the organism presents. The Kauffman–White classification scheme differentiates serological varieties from each other. Serotypes are usually put into subspecies groups after the genus and species, with the serotypes/serovars capitalized, but not italicized: An example is Salmonella enterica serovar Typhimurium. More modern approaches for typing and subtyping Salmonella include DNA-based methods such as pulsed field gel electrophoresis, multiple-loci VNTR analysis, multilocus sequence typing, and multiplex-PCR-based methods. In 2005, a third species, Salmonella subterranea, was proposed, but according to the World Health Organization, the bacterium reported does not belong in the genus Salmonella. In 2016, S. subterranea was proposed to be assigned to Atlantibacter subterranea, but LPSN rejects it as an invalid publication, as it was made outside of IJSB and IJSEM. GTDB and NCBI agree with the 2016 reassignment. GTDB RS202 reports that S. arizonae, S. diarizonae, and S. houtenae should be species of their own. Pathogenicity Salmonella species are facultative intracellular pathogens. Salmonella can invade different cell types, including epithelial cells, M cells, macrophages, and dendritic cells. As facultative anaerobic organism, Salmonella uses oxygen to make adenosine triphosphate (ATP) in aerobic environments (i.e., when oxygen is available). However, in anaerobic environments (i.e., when oxygen is not available) Salmonella produces ATP by fermentation — that is, by substituting, instead of oxygen, at least one of four electron acceptors at the end of the electron transport chain: sulfate, nitrate, sulfur, or fumarate (all of which are less efficient than oxygen). Most infections are due to ingestion of food contaminated by animal feces, or by human feces (for example, from the hands of a food-service worker at a commercial eatery). Salmonella serotypes can be divided into two main groups—typhoidal and nontyphoidal. Typhoidal serotypes include Salmonella Typhi and Salmonella Paratyphi A, which are adapted to humans and do not occur in other animals. 
Nontyphoidal serotypes are more common, and usually cause self-limiting gastrointestinal disease. They can infect a range of animals, and are zoonotic, meaning they can be transferred between humans and other animals. Salmonella pathogenicity and host interaction have been studied extensively since the 2010s. Most of the important virulence genes of Salmonella are encoded in five pathogenicity islands, the so-called Salmonella pathogenicity islands (SPIs). These are chromosomally encoded and make a significant contribution to bacterial-host interaction. Other traits, such as plasmids, flagella, or biofilm-related proteins, can also contribute to infection. SPIs are regulated by complex and fine-tuned regulatory networks that allow gene expression only in the presence of the right environmental stresses. Molecular modeling and active site analysis of the SdiA homolog, a putative quorum sensor for Salmonella Typhimurium pathogenicity, reveals the specific binding patterns of AHL transcriptional regulators. It is also known that the Salmonella plasmid virulence gene spvB enhances bacterial virulence by inhibiting autophagy. Typhoidal Salmonella Typhoid fever is caused by Salmonella serotypes which are strictly adapted to humans or higher primates; these include Salmonella Typhi, Paratyphi A, Paratyphi B, and Paratyphi C. In the systemic form of the disease, salmonellae pass through the lymphatic system of the intestine into the blood of the patients (typhoid form) and are carried to various organs (liver, spleen, kidneys) to form secondary foci (septic form). Endotoxins first act on the vascular and nervous apparatus, resulting in increased permeability and decreased tone of the vessels, upset of thermal regulation, and vomiting and diarrhoea. In severe forms of the disease, enough liquid and electrolytes are lost to upset the water-salt metabolism, decrease the circulating blood volume and arterial pressure, and cause hypovolemic shock. Septic shock may also develop. Shock of mixed character (with signs of both hypovolemic and septic shock) is more common in severe salmonellosis. Oliguria and azotemia may develop in severe cases as a result of renal involvement due to hypoxia and toxemia. Nontyphoidal Salmonella Non-invasive Infection with nontyphoidal serotypes of Salmonella generally results in food poisoning. Infection usually occurs when a person ingests foods that contain a high concentration of the bacteria. Infants and young children are much more susceptible to infection, which can be caused by ingesting even a small number of bacteria. In infants, infection through inhalation of bacteria-laden dust is possible. The organisms enter through the digestive tract and must be ingested in large numbers to cause disease in healthy adults. An infection can only begin after living salmonellae (not merely Salmonella-produced toxins) reach the gastrointestinal tract. Some of the microorganisms are killed in the stomach, while the surviving ones enter the small intestine and multiply in tissues. Gastric acidity is responsible for the destruction of the majority of ingested bacteria, but Salmonella has evolved a degree of tolerance to acidic environments that allows a subset of ingested bacteria to survive. Bacterial colonies may also become trapped in mucus produced in the esophagus. By the end of the incubation period, the nearby host cells are poisoned by endotoxins released from the dead salmonellae. The local response to the endotoxins is enteritis and gastrointestinal disorder. 
About 2,000 serotypes of nontyphoidal Salmonella are known, which may be responsible for as many as 1.4 million illnesses in the United States each year. People who are at risk for severe illness include infants, the elderly, organ-transplant recipients, and the immunocompromised. Invasive While in developed countries, nontyphoidal serotypes present mostly as gastrointestinal disease, in sub-Saharan Africa, these serotypes can create a major problem in bloodstream infections, and are the most commonly isolated bacteria from the blood of those presenting with fever. Bloodstream infections caused by nontyphoidal salmonellae in Africa were reported in 2012 to have a case fatality rate of 20–25%. Most cases of invasive nontyphoidal Salmonella infection (iNTS) are caused by Salmonella enterica Typhimurium or Salmonella enterica Enteritidis. A new form of Salmonella Typhimurium (ST313) emerged in the southeast of the African continent 75 years ago, followed by a second wave which came out of central Africa 18 years later. This second wave of iNTS possibly originated in the Congo Basin, and early in the event picked up a gene that made it resistant to the antibiotic chloramphenicol. This created the need to use expensive antimicrobial drugs in areas of Africa that were very poor, making treatment difficult. The increased prevalence of iNTS in sub-Saharan Africa compared to other regions is thought to be due to the large proportion of the African population with some degree of immune suppression or impairment due to the burden of HIV, malaria, and malnutrition, especially in children. The genetic makeup of iNTS is evolving into a more typhoid-like bacterium, able to efficiently spread around the human body. Symptoms are reported to be diverse, including fever, hepatosplenomegaly, and respiratory symptoms, often with an absence of gastrointestinal symptoms. Epidemiology Because they are frequently sporadic, between 60% and 80% of salmonella infections go undiagnosed. A data analysis completed in March 2010 estimated an incidence rate of 1,140 per 100,000 person-years. In the same analysis, 93.8 million cases of gastroenteritis were attributed to salmonella infections, with 5th and 95th percentile estimates of 61.8 million and 131.6 million cases, respectively. The estimated number of deaths due to salmonella was approximately 155,000. In 2014, in countries such as Bulgaria and Portugal, children under 4 were 32 and 82 times more likely, respectively, to have a salmonella infection. Those who are most susceptible to infection are children, pregnant women, elderly people, and those with deficient immune systems. Risk factors for Salmonella infections include a variety of foods. Meats such as chicken and pork may be contaminated. A variety of vegetables and sprouts may also have salmonella. Lastly, a variety of processed foods such as chicken nuggets and pot pies may also contain these bacteria. Successful forms of prevention come from existing entities such as the FDA, the United States Department of Agriculture, and the Food Safety and Inspection Service. All of these organizations create standards and conduct inspections to ensure public safety in the U.S. For example, the FSIS, working with the USDA, has a Salmonella Action Plan in place, which received a two-year update in February 2016. Their accomplishments and strategies to reduce Salmonella infection are presented in the plans. 
The Centers for Disease Control and Prevention also provides valuable information on preventative care, such as how to safely handle raw foods and the correct way to store these products. In the European Union, the European Food Safety Authority created preventative measures through risk management and risk assessment. From 2005 to 2009, the EFSA implemented an approach to reduce exposure to Salmonella. Their approach included risk assessment and risk management of poultry, which resulted in a reduction of infection cases by one half. In Latin America, an orally administered vaccine for Salmonella in poultry, developed by Dr. Sherry Layton, has been introduced which prevents the bacteria from contaminating the birds. A recent Salmonella Typhimurium outbreak has been linked to chocolate produced in Belgium, leading to the country halting Kinder chocolate production. Global monitoring In Germany, food-borne infections must be reported. From 1990 to 2016, the number of officially recorded cases decreased from about 200,000 to about 13,000 cases. In the United States, about 1,200,000 cases of Salmonella infection are estimated to occur each year. A World Health Organization study estimated that 21,650,974 cases of typhoid fever occurred in 2000, 216,510 of which resulted in death, along with 5,412,744 cases of paratyphoid fever. Molecular mechanisms of infection The mechanisms of infection differ between typhoidal and nontyphoidal serotypes, owing to their different targets in the body and the different symptoms that they cause. Both groups must enter by crossing the barrier created by the intestinal cell wall, but once they have passed this barrier, they use different strategies to cause infection. Switch to virulence While travelling to their target tissue in the gastrointestinal tract, Salmonella is exposed to stomach acid, to the detergent-like activity of bile in the intestine, to decreasing oxygen supply, to the competing normal gut flora, and finally to antimicrobial peptides present on the surface of the cells lining the intestinal wall. All of these are stresses that Salmonella can sense and react against, and together they regulate the switch from normal growth in the intestine into virulence. The switch to virulence gives access to a replication niche inside the host (such as humans), and can be summarised into several stages: Approach, in which they travel towards a host cell via intestinal peristalsis and through active swimming via the flagella, penetrate the mucus barrier, and locate themselves close to the epithelium lining the intestine; Adhesion, in which they adhere to a host cell using bacterial adhesins and a type III secretion system; Invasion, in which Salmonella enter the host cell (see variant mechanisms below); Replication, in which the bacterium may reproduce inside the host cell; Spread, in which the bacterium can spread to other organs via cells in the blood (if it succeeded in avoiding the immune defence), or alternatively can go back towards the intestine, re-seeding the intestinal population; and Re-invasion (a secondary infection, if now at a systemic site) and further replication. Mechanisms of entry Nontyphoidal serotypes preferentially enter M cells on the intestinal wall by bacterial-mediated endocytosis, a process associated with intestinal inflammation and diarrhoea. 
They are also able to disrupt tight junctions between the cells of the intestinal wall, impairing the cells' ability to stop the flow of ions, water, and immune cells into and out of the intestine. The combination of the inflammation caused by bacterial-mediated endocytosis and the disruption of tight junctions is thought to contribute significantly to the induction of diarrhoea. Salmonellae are also able to breach the intestinal barrier via phagocytosis and trafficking by CD18-positive immune cells, which may be a mechanism key to typhoidal Salmonella infection. This is thought to be a more stealthy way of passing the intestinal barrier, and may, therefore, contribute to the fact that lower numbers of typhoidal Salmonella are required for infection than nontyphoidal Salmonella. Salmonella cells are able to enter macrophages via macropinocytosis. Typhoidal serotypes can use this to achieve dissemination throughout the body via the mononuclear phagocyte system, a network of connective tissue that contains immune cells, and surrounds tissue associated with the immune system throughout the body. Much of the success of Salmonella in causing infection is attributed to two type III secretion systems (T3SS) which are expressed at different times during the infection. The T3SS-1 enables the injection of bacterial effectors within the host cytosol. These T3SS-1 effectors stimulate the formation of membrane ruffles allowing the uptake of Salmonella by nonphagocytic cells. Salmonella further resides within a membrane-bound compartment called the Salmonella-Containing Vacuole (SCV). The acidification of the SCV leads to the expression of the T3SS-2. The secretion of T3SS-2 effectors by Salmonella is required for its efficient survival in the host cytosol and establishment of systemic disease. In addition, both T3SS are involved in the colonization of the intestine, induction of intestinal inflammatory responses and diarrhea. These systems contain many genes which must work cooperatively to achieve infection. The AvrA toxin injected by the SPI1 type III secretion system of S. Typhimurium works to inhibit the innate immune system by virtue of its serine/threonine acetyltransferase activity, and requires binding to eukaryotic target cell phytic acid (IP6). This leaves the host more susceptible to infection. Clinical symptoms Salmonellosis is known to be able to cause back pain or spondylosis. It can manifest as five clinical patterns: gastrointestinal tract infection, enteric fever, bacteremia, local infection, and the chronic reservoir state. The initial symptoms are nonspecific fever, weakness, and myalgia among others. In the bacteremia state, it can spread to any parts of the body and this induces localized infection or it forms abscesses. The forms of localized Salmonella infections are arthritis, urinary tract infection, infection of the central nervous system, bone infection, soft tissue infection, etc. Infection may remain as the latent form for a long time, and when the function of reticular endothelial cells is deteriorated, it may become activated and consequently, it may secondarily induce spreading infection in the bone several months or several years after acute salmonellosis. A 2018 Imperial College London study also shows how salmonella disrupt specific arms of the immune system (e.g. 3 of 5 NF-kappaB proteins) using a family of zinc metalloproteinase effectors, leaving others untouched. Salmonella thyroid abscess has also been reported. 
Resistance to oxidative burst A hallmark of Salmonella pathogenesis is the ability of the bacterium to survive and proliferate within phagocytes. Phagocytes produce DNA-damaging agents such as nitric oxide and oxygen radicals as a defense against pathogens. Thus, Salmonella species must face attack by molecules that challenge genome integrity. Buchmeier et al. showed that mutants of S. enterica lacking RecA or RecBC protein function are highly sensitive to oxidative compounds synthesized by macrophages, and furthermore these findings indicate that successful systemic infection by S. enterica requires RecA- and RecBC-mediated recombinational repair of DNA damage. Host adaptation S. enterica, through some of its serotypes such as Typhimurium and Enteritidis, shows signs that it has the ability to infect several different mammalian host species, while other serotypes, such as Typhi, seem to be restricted to only a few hosts. Two ways that Salmonella serotypes have adapted to their hosts are by the loss of genetic material, and mutation. In more complex mammalian species, immune systems, which include pathogen specific immune responses, target serovars of Salmonella by binding antibodies to structures such as flagella. Thus Salmonella that has lost the genetic material which codes for a flagellum to form can evade a host's immune system. mgtC leader RNA from bacteria virulence gene (mgtCBR operon) decreases flagellin production during infection by directly base pairing with mRNAs of the fljB gene encoding flagellin and promotes degradation. In the study by Kisela et al., more pathogenic serovars of S. enterica were found to have certain adhesins in common that have developed out of convergent evolution. This means that, as these strains of Salmonella have been exposed to similar conditions such as immune systems, similar structures evolved separately to negate these similar, more advanced defenses in hosts. Although many questions remain about how Salmonella has evolved into so many different types, Salmonella may have evolved through several phases. For example, as Baumler et al. have suggested, Salmonella most likely evolved through horizontal gene transfer, and through the formation of new serovars due to additional pathogenicity islands, and through an approximation of its ancestry. So, Salmonella could have evolved into its many different serotypes by gaining genetic information from different pathogenic bacteria. The presence of several pathogenicity islands in the genome of different serotypes has lent credence to this theory. Salmonella sv. Newport shows signs of adaptation to a plant-colonization lifestyle, which may play a role in its disproportionate association with food-borne illness linked to produce. A variety of functions selected for during sv. Newport persistence in tomatoes have been reported to be similar to those selected for in sv. Typhimurium from animal hosts. The papA gene, which is unique to sv. Newport, contributes to the strain's fitness in tomatoes, and has homologs in the genomes of other Enterobacteriaceae that are able to colonize plant and animal hosts. Research In addition to their importance as pathogens, nontyphoidal Salmonella species such as S. enterica serovar Typhimurium are commonly used as homologues of typhoid species. Many findings are transferable and it attenuates the danger for the researcher in case of contamination, but is also limited. For example, it is not possible to study specific typhoidal toxins using this model. 
However, strong research tools such as the commonly-used mouse intestine gastroenteritis model build upon the use of Salmonella Typhimurium. For genetics, S. Typhimurium has been instrumental in the development of genetic tools that led to an understanding of fundamental bacterial physiology. These developments were enabled by the discovery of the first generalized transducing phage P22 in S. Typhimurium, that allowed quick and easy genetic editing. In turn, this made fine structure genetic analysis possible. The large number of mutants led to a revision of genetic nomenclature for bacteria. Many of the uses of transposons as genetic tools, including transposon delivery, mutagenesis, and construction of chromosome rearrangements, were also developed in S. Typhimurium. These genetic tools also led to a simple test for carcinogens, the Ames test. As a natural alternative to traditional antimicrobials, phages are being recognised as highly effective control agents for Salmonella and other foodborne bacteria. Ancient DNA S. enterica genomes have been reconstructed from up to 6,500 year old human remains across Western Eurasia, which provides evidence for geographic widespread infections with systemic S. enterica during prehistory, and a possible role of the Neolithization process in the evolution of host adaptation. Additional reconstructed genomes from colonial Mexico suggest S. enterica as the cause of cocoliztli, an epidemic in 16th-century New Spain.
Biology and health sciences
Other organisms
null
42116
https://en.wikipedia.org/wiki/Server%20%28computing%29
Server (computing)
A server is a computer that provides information to other computers called "clients" on a computer network. This architecture is called the client–server model. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients or performing computations for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers. Client–server systems are usually most frequently implemented by (and often identified with) the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client, typically with a result or acknowledgment. Designating a computer as "server-class hardware" implies that it is specialized for running servers on it. This often implies that it is more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many relatively simple, replaceable server components. History The use of the word server in computing comes from queueing theory, where it dates to the mid 20th century, being notably used in (along with "service"), the paper that introduced Kendall's notation. In earlier papers, such as the , more concrete terms such as "[telephone] operators" are used. In computing, "server" dates at least to RFC 5 (1969), one of the earliest documents describing ARPANET (the predecessor of Internet), and is contrasted with "user", distinguishing two types of host: "server-host" and "user-host". The use of "serving" also dates to early documents, such as RFC 4, contrasting "serving-host" with "using-host". The Jargon File defines server in the common sense of a process performing service for requests, usually remote, with the 1981 version reading: The average utilization of a server in the early 2000s was 5 to 15%, but with the adoption of virtualization this figure started to increase to reduce the number of servers needed. Operation Strictly speaking, the term server refers to a computer program or process (running program). Through metonymy, it refers to a device used for (or a device dedicated to) running one or several server programs. On a network, such a device is called a host. In addition to server, the words serve and service (as verb and as noun respectively) are frequently used, though servicer and servant are not. The word service (noun) may refer to the abstract form of functionality, e.g. Web service. Alternatively, it may refer to a computer program that turns a computer into a server, e.g. Windows service. Originally used as "servers serve users" (and "users use servers"), in the sense of "obey", today one often says that "servers serve data", in the same sense as "give". For instance, web servers "serve [up] web pages to users" or "service their requests". The server is part of the client–server model; in this model, a server serves data for clients. The nature of communication between a client and server is request and response. This is in contrast with peer-to-peer model in which the relationship is on-demand reciprocation. 
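As a minimal sketch of the request–response exchange just described (a hypothetical example in Python; the loopback address, port number, and plain-text protocol are arbitrary choices, not part of any particular product), a server process and a client process could look like this:

# Request-response sketch: the server waits for a request, performs a
# trivial "service" (upper-casing the text), and sends a response back.
import socket

def run_server(host="127.0.0.1", port=50007):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen()
        conn, _addr = srv.accept()        # wait for a client to connect
        with conn:
            request = conn.recv(1024)     # the client's request
            conn.sendall(request.upper()) # the response

def run_client(host="127.0.0.1", port=50007):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))
        cli.sendall(b"hello server")      # request
        print(cli.recv(1024))             # response: b'HELLO SERVER'

Run with the server process started first and the client second. Because the client here connects to the loopback address, both processes may run on the same device, reflecting the point above that a client process may run on the same device as the server or connect to it over a network.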
In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process or processes is a client. Thus any general-purpose computer connected to a network can host servers. For example, if files on a device are shared by some process, that process is a file server. Similarly, web server software can run on any capable computer, and so a laptop or a personal computer can host a web server. While request–response is the most common client-server design, there are others, such as the publish–subscribe pattern. In the publish-subscribe pattern, clients register with a pub-sub server, subscribing to specified types of messages; this initial registration may be done by request-response. Thereafter, the pub-sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request-response. Purpose The role of a server is to share data as well as to share resources and distribute work. A server computer can serve its own computer programs as well; depending on the scenario, this could be part of a quid pro quo transaction, or simply a technical possibility. The following table shows several scenarios in which a server is used. Almost the entire structure of the Internet is based upon a client–server model. High-level root nameservers, DNS, and routers direct the traffic on the internet. There are millions of servers connected to the Internet, running continuously throughout the world, and virtually every action taken by an ordinary Internet user requires one or more interactions with one or more servers. There are exceptions that do not use dedicated servers; for example, peer-to-peer file sharing and some implementations of telephony (e.g. pre-Microsoft Skype). Hardware Hardware requirements for servers vary widely, depending on the server's purpose and its software. Servers are often more powerful and expensive than the clients that connect to them. The name server is used for both the hardware and the software. For hardware, it is usually limited to mean high-end machines, although server software can run on a wide variety of hardware. Since servers are usually accessed over a network, many run unattended without a computer monitor, input device, audio hardware, or USB interfaces. Many servers do not have a graphical user interface (GUI). They are configured and managed remotely. Remote management can be conducted via various methods including Microsoft Management Console (MMC), PowerShell, SSH, and browser-based out-of-band management systems such as Dell's iDRAC or HP's iLO. Large servers Large traditional single servers would need to be run for long periods without interruption. Availability would have to be very high, making hardware reliability and durability extremely important. Mission-critical enterprise servers would be very fault tolerant and use specialized hardware with low failure rates in order to maximize uptime. Uninterruptible power supplies might be incorporated to guard against power failure. Servers typically include hardware redundancy such as dual power supplies, RAID disk systems, and ECC memory, along with extensive pre-boot memory testing and verification. 
Critical components might be hot swappable, allowing technicians to replace them on the running server without shutting it down; to guard against overheating, servers might have more powerful fans or use water cooling. They will often be able to be configured, powered up and down, or rebooted remotely, using out-of-band management, typically based on IPMI. Server casings are usually flat and wide, and designed to be rack-mounted, either on 19-inch racks or on Open Racks. These types of servers are often housed in dedicated data centers, which normally provide very stable power and Internet connectivity as well as increased security. Noise is also less of a concern, but power consumption and heat output can be a serious issue. Server rooms are equipped with air conditioning devices. Clusters A server farm or server cluster is a collection of computer servers maintained by an organization to supply server functionality far beyond the capability of a single device. Modern data centers are now often built of very large clusters of much simpler servers, and there is a collaborative effort, the Open Compute Project, around this concept. Appliances A class of small specialist servers called network appliances are generally at the low end of the scale, often being smaller than common desktop computers. Mobile A mobile server has a portable form factor, e.g. a laptop. In contrast to large data centers or rack servers, the mobile server is designed for on-the-road or ad hoc deployment into emergency, disaster or temporary environments where traditional servers are not feasible due to their power requirements, size, and deployment time. The main beneficiaries of so-called "server on the go" technology include network managers, software or database developers, training centers, military personnel, law enforcement, forensics, emergency relief groups, and service organizations. To facilitate portability, features such as the keyboard, display, battery (uninterruptible power supply, to provide power redundancy in case of failure), and mouse are all integrated into the chassis. Operating systems On the Internet, the dominant operating systems among servers are UNIX-like open-source distributions, such as those based on Linux and FreeBSD, with Windows Server also having a significant share. Proprietary operating systems such as z/OS and macOS Server are also deployed, but in much smaller numbers. Servers that run Linux are commonly used as web servers or database servers. Windows Server is commonly used for networks made up of Windows clients. Specialist server-oriented operating systems have traditionally had features such as: a GUI that is not available or is optional; the ability to reconfigure and update both hardware and software to some extent without restart; advanced backup facilities to permit regular and frequent online backups of critical data; transparent data transfer between different volumes or devices; flexible and advanced networking capabilities; automation capabilities such as daemons in UNIX and services in Windows; tight system security, with advanced user, resource, data, and memory protection; and advanced detection and alerting on conditions such as overheating and processor and disk failure. In practice, today many desktop and server operating systems share similar code bases, differing mostly in configuration. Energy consumption In 2010, data centers (servers, cooling, and other electrical infrastructure) were responsible for 1.1–1.5% of electrical energy consumption worldwide and 1.7–2.2% in the United States. 
One estimate is that total energy consumption for information and communications technology saves more than 5 times its carbon footprint in the rest of the economy by increasing efficiency. Global energy consumption is increasing due to the increasing demand for data and bandwidth. The Natural Resources Defense Council (NRDC) states that data centers used 91 billion kilowatt-hours (kWh) of electrical energy in 2013, which accounts for 3% of global electricity usage. Environmental groups have placed focus on the carbon emissions of data centers, as they account for 200 million metric tons of carbon dioxide in a year.
Technology
Networks
null
42139
https://en.wikipedia.org/wiki/Garden
Garden
A garden is a planned space, usually outdoors, set aside for the cultivation, display, and enjoyment of plants and other forms of nature. The single feature identifying even the wildest wild garden is control. The garden can incorporate both natural and artificial materials. Gardens often have design features including statuary, follies, pergolas, trellises, stumperies, dry creek beds, and water features such as fountains, ponds (with or without fish), waterfalls or creeks. Some gardens are for ornamental purposes only, while others also produce food crops, sometimes in separate areas, or sometimes intermixed with the ornamental plants. Food-producing gardens are distinguished from farms by their smaller scale, more labor-intensive methods, and their purpose (enjoyment of a hobby or self-sustenance rather than producing for sale, as in a market garden). Flower gardens combine plants of different heights, colors, textures, and fragrances to create interest and delight the senses. The most common form today is a residential or public garden, but the term garden has traditionally been a more general one. Zoos, which display wild animals in simulated natural habitats, were formerly called zoological gardens. Western gardens are almost universally based on plants, with garden, which etymologically implies enclosure, often signifying a shortened form of botanical garden. Some traditional types of eastern gardens, such as Zen gardens, however, use plants sparsely or not at all. Landscape gardens, on the other hand, such as the English landscape gardens first developed in the 18th century, may omit flowers altogether. Landscape architecture is a related professional activity with landscape architects tending to engage in design at many scales and working on both public and private projects. Etymology The etymology of the word gardening refers to enclosure: it is from Middle English gardin, from Anglo-French gardin, jardin, of Germanic origin; akin to Old High German gard, gart, an enclosure or compound, as in Stuttgart. See Grad (Slavic settlement) for more complete etymology. The words yard, court, and Latin hortus (meaning "garden", hence horticulture and orchard), are cognates—all referring to an enclosed space. The term "garden" in British English refers to a small enclosed area of land, usually adjoining a building. This would be referred to as a yard in American English. Uses A garden can have aesthetic, functional, and recreational uses: Cooperation with nature Plant cultivation Garden-based learning Observation of nature Bird- and insect-watching Reflection on the changing seasons Relaxation Placing down different types of garden gnomes Family dinners on the terrace Children playing in the garden Reading and relaxing in a hammock Maintaining the flowerbeds Pottering in the shed Basking in warm sunshine Escaping oppressive sunlight and heat Growing useful produce Flowers to cut and bring inside for indoor beauty Fresh herbs and vegetables for cooking History Asia China The earliest recorded Chinese gardens were created in the valley of the Yellow River, during the Shang dynasty (1600–1046 BC). These gardens were large enclosed parks where the kings and nobles hunted game, or where fruit and vegetables were grown. Early inscriptions from this period, carved on tortoise shells, have three Chinese characters for garden, you, pu and yuan. You was a royal garden where birds and animals were kept, while pu was a garden for plants. 
During the Qin dynasty (221–206 BC), yuan became the character for all gardens. The old character for yuan is a small picture of a garden; it is enclosed in a square which can represent a wall, and has symbols which can represent the plan of a structure, a small square which can represent a pond, and a symbol for a plantation or a pomegranate tree. A famous royal garden of the late Shang dynasty was the Terrace, Pond and Park of the Spirit (Lingtai, Lingzhao Lingyou) built by King Wenwang west of his capital city, Yin. The park was described in the Classic of Poetry this way: The King makes his promenade in the Park of the Spirit, The deer are kneeling on the grass, feeding their fawns, The deer are beautiful and resplendent. The immaculate cranes have plumes of a brilliant white. The King makes his promenade to the Pond of the Spirit, The water is full of fish, who wriggle. Another early royal garden was Shaqui, or the Dunes of Sand, built by the last Shang ruler, King Zhou (1075–1046 BC). It was composed of an earth terrace, or tai, which served as an observation platform in the center of a large square park. It was described in one of the early classics of Chinese literature, the Records of the Grand Historian (Shiji). According to the Shiji, one of the most famous features of this garden was the Wine Pool and Meat Forest (酒池肉林). A large pool, big enough for several small boats, was constructed on the palace grounds, with inner linings of polished oval shaped stones from the seashore. The pool was then filled with wine. A small island was constructed in the middle of the pool, where trees were planted, which had skewers of roasted meat hanging from their branches. King Zhou and his friends and concubines drifted in their boats, drinking the wine with their hands and eating the roasted meat from the trees. Later Chinese philosophers and historians cited this garden as an example of decadence and bad taste. During the Spring and Autumn period (722–481 BC), in 535 BC, the Terrace of Shanghua, with lavishly decorated palaces, was built by King Jing of the Zhou dynasty. In 505 BC, an even more elaborate garden, the Terrace of Gusu, was begun. It was located on the side of a mountain, and included a series of terraces connected by galleries, along with a lake where boats in the form of blue dragons navigated. From the highest terrace, a view extended as far as Lake Tai, the Great Lake. India Manasollasa is a twelfth century Sanskrit text that offers details on garden design and a variety of other subjects. Both public parks and woodland gardens are described, with about 40 types of trees recommended for the park in the Vana-krida chapter. Shilparatna, a text from the sixteenth century, states that flower gardens or public parks should be located in the northern portion of a town. Japan The earliest recorded Japanese gardens were the pleasure gardens of the Emperors and nobles. They were mentioned in several brief passages of the , the first chronicle of Japanese history, published in 720 CE. In spring 74 CE, the chronicle recorded: "The Emperor Keikō put a few carp into a pond, and rejoiced to see them morning and evening". The following year, "The Emperor launched a double-hulled boat in the pond of Ijishi at Ihare, and went aboard with his imperial concubine, and they feasted sumptuously together". In 486, the chronicle recorded that "The Emperor Kenzō went into the garden and feasted at the edge of a winding stream". 
Korea Korean gardens are a type of garden described as being natural, informal, simple and unforced, seeking to merge with the natural world. They have a history that goes back more than two thousand years, but are little known in the West. The oldest records date to the Three Kingdoms period (57 BC – 668 AD), when architecture and palace gardens showed a development noted in the Korean History of the Three Kingdoms. Europe Gardening was not recognized as an art form in Europe until the mid-16th century, when it entered the political discourse as a symbol of the concept of the "ideal republic", evoking utopian imagery of the Garden of Eden, a time of abundance and plenty when humans did not know hunger or the conflicts that arose from property disputes. John Evelyn wrote in the early 17th century, "there is not a more laborious life then is that of a good Gard'ners; but a labour full of tranquility and satisfaction; Natural and Instructive, and such as (if any) contributes to Piety and Contemplation." During the era of Enclosures, the agrarian collectivism of the feudal age was idealized in literary "fantasies of liberating regression to garden and wilderness". France Following his campaign in Italy in 1495, where he saw the gardens and castles of Naples, King Charles VIII brought Italian craftsmen and garden designers, such as Pacello da Mercogliano, from Naples and ordered the construction of Italian-style gardens at his residence at the Château d'Amboise and at Château Gaillard, another private residence in Amboise. His successor Henry II, who had also travelled to Italy and had met Leonardo da Vinci, created an Italian garden nearby at the Château de Blois. Beginning in 1528, King Francis I created new gardens at the Château de Fontainebleau, which featured fountains, parterres, a forest of pine trees brought from Provence, and the first artificial grotto in France. The Château de Chenonceau had two gardens in the new style, one created for Diane de Poitiers in 1551, and a second for Catherine de' Medici in 1560. In 1536, the architect Philibert de l'Orme, upon his return from Rome, created the gardens of the Château d'Anet following the Italian rules of proportion. The carefully prepared harmony of Anet, with its parterres and surfaces of water integrated with sections of greenery, became one of the earliest and most influential examples of the classic French garden. The French formal garden contrasted with the design principles of the English landscape garden, namely, to "force nature" instead of leaving it undisturbed. Typical French formal gardens had "parterres, geometrical shapes and neatly clipped topiary", in contrast to the English style of garden in which "plants and shrubs seem to grow naturally without artifice." By the mid-17th century, axial symmetry had ascended to prominence in the French gardening traditions of Andre Mollet and Jacques Boyceau, of whom the latter wrote: "All things, however beautiful they may be chosen, will be defective if they are not ordered and placed in proper symmetry." A good example of the French formal style is the Tuileries gardens in Paris, which were originally designed during the reign of King Henry II in the mid-sixteenth century. The gardens were redesigned into the formal French style for the Sun King Louis XIV. The gardens were ordered into symmetrical lines: long rows of elm or chestnut trees, clipped hedgerows, along with parterres, "reflect[ing] the orderly triumph of man's will over nature." 
The French landscape garden was influenced by the English landscape garden and gained prominence in the late eighteenth century. United Kingdom Before the Grand Manner era, a few significant gardens were found in Britain which had been developed under the influence of the continent. Britain's homegrown domestic gardening traditions were mostly practical in purpose, rather than aesthetic, unlike the grand gardens found mostly on castle grounds, and less commonly in universities. Tudor gardens emphasized contrast rather than transitions, distinguished by color and illusion. They were not intended as a complement to home or architecture, but conceived as independent spaces, arranged to grow and display flowers and ornamental plants. Gardeners demonstrated their artistry in knot gardens, with complex arrangements that most commonly included interwoven box hedges and, less commonly, fragrant herbs like rosemary. Sanded paths ran between the hedges of open knots, whereas closed knots were filled with single-colored flowers. The knot and parterre gardens were always placed on level ground, with elevated areas reserved for terraces from which the intricacy of the gardens could be viewed. Jacobean gardens were described as "a delightful confusion" by Henry Wotton in 1624. Under the influence of the Italian Renaissance, Caroline gardens began to shed some of the chaos of earlier designs, marking the beginning of a trend towards symmetrical, unified designs that took the building architecture into account and featured an elevated terrace from which home and garden could be viewed. The only surviving Caroline garden is located at Bolsover Castle in Derbyshire, but is too simple to attract much interest. During the reign of Charles II, many new Baroque-style country houses were built, while in England Oliver Cromwell sought to destroy many Tudor, Jacobean and Caroline style gardens. Design Garden design is the process of creating plans for the layout and planting of gardens and landscapes. Gardens may be designed by garden owners themselves, or by professionals. Professional garden designers tend to be trained in principles of design and horticulture, and have a knowledge and experience of using plants. Some professional garden designers are also landscape architects, a more formal level of training that usually requires an advanced degree and often an occupational license. Elements of garden design include the layout of hard landscape, such as paths, rockeries, walls, water features, sitting areas and decking, as well as the plants themselves, with consideration for their horticultural requirements, their season-to-season appearance, lifespan, growth habit, size, speed of growth, and combinations with other plants and landscape features. Most gardens consist of a mixture of natural and constructed elements, although even very 'natural' gardens are always an inherently artificial creation. Natural elements present in a garden principally comprise flora (such as trees and weeds), fauna (such as arthropods and birds), soil, water, air and light. Constructed elements include not only paths, patios, decking, sculptures, drainage systems, lights and buildings (such as sheds, gazebos, pergolas and follies), but also living constructions such as flower beds, ponds and lawns. A garden's maintenance needs are also taken into consideration. 
These include the time or funds available for regular maintenance (which can affect the choice of plants with regard to speed of growth), the spreading or self-seeding of the plants (annual or perennial), bloom time, and many other characteristics. Garden design can be roughly divided into two groups, formal and naturalistic gardens. The most important consideration in any garden design is how the garden will be used, followed closely by the desired stylistic genres, and the way the garden space will connect to the home or other structures in the surrounding areas. All of these considerations are subject to budget limitations. Budget limitations can be addressed by a simpler garden style with fewer plants and less costly hard landscape materials, seeds rather than sod for lawns, and plants that grow quickly; alternatively, garden owners may choose to create their garden over time, area by area. Types Environmental impact Gardeners may cause environmental damage by the way they garden, or they may enhance their local environment. Damage by gardeners can include direct destruction of natural habitats when houses and gardens are created; indirect habitat destruction and damage caused by providing garden materials such as peat and rock for rock gardens, and by the use of tapwater to irrigate gardens; the death of living beings in the garden itself, such as the killing not only of slugs and snails but also their predators such as hedgehogs and song thrushes by metaldehyde slug killer; the death of living beings outside the garden, such as local species extinction by indiscriminate plant collectors; and climate change caused by greenhouse gases produced by gardening. Climate change Gardeners can help to prevent climate change in many ways, including the use of trees, shrubs, ground cover plants and other perennial plants in their gardens, turning garden waste into soil organic matter instead of burning it, keeping soil and compost heaps aerated, avoiding peat, switching from power tools to hand tools or changing their garden design so that power tools are not needed, and using nitrogen-fixing plants instead of nitrogen fertiliser. Climate change will have many impacts on gardens; some studies suggest most of them will be negative. Gardens also contribute to climate change. Greenhouse gases can be produced by gardeners in many ways. The three main greenhouse gases are carbon dioxide, methane, and nitrous oxide. Gardeners produce carbon dioxide directly by overcultivating soil and destroying soil carbon, by burning garden waste on bonfires, by using power tools which burn fossil fuel or use electricity generated by fossil fuels, and by using peat. Gardeners produce methane by compacting the soil and making it anaerobic, and by allowing their compost heaps to become compacted and anaerobic. Gardeners produce nitrous oxide by applying excess nitrogen fertiliser when plants are not actively growing, so that the nitrogen in the fertiliser is converted by soil bacteria to nitrous oxide. Irrigation Some gardeners manage their gardens without using any water from outside the garden. Examples in Britain include Ventnor Botanic Garden on the Isle of Wight, and parts of Beth Chatto's garden in Essex, Sticky Wicket garden in Dorset, and the Royal Horticultural Society's gardens at Harlow Carr and Hyde Hall. Rain gardens absorb rainfall falling onto nearby hard surfaces, rather than sending it into stormwater drains.
Technology
Food and health
null
42168
https://en.wikipedia.org/wiki/Data%20communication
Data communication
Data communication, including data transmission and data reception, is the transfer of data, transmitted and received over a point-to-point or point-to-multipoint communication channel. Examples of such channels are copper wires, optical fibers, wireless communication using radio spectrum, storage media and computer buses. The data are represented as an electromagnetic signal, such as an electrical voltage, radiowave, microwave, or infrared signal. Analog transmission is a method of conveying voice, data, image, signal or video information using a continuous signal that varies in amplitude, phase, or some other property in proportion to that of a variable. The messages are either represented by a sequence of pulses by means of a line code (baseband transmission), or by a limited set of continuously varying waveforms (passband transmission), using a digital modulation method. The passband modulation and corresponding demodulation is carried out by modem equipment. Digital communications, including digital transmission and digital reception, is the transfer of either a digitized analog signal or a born-digital bitstream. According to the most common definition, both baseband and passband bit-stream components are considered part of a digital signal; an alternative definition considers only the baseband signal as digital, and passband transmission of digital data as a form of digital-to-analog conversion. Distinction between related subjects Courses and textbooks in the field of data transmission as well as digital transmission and digital communications have similar content. Digital transmission or data transmission traditionally belongs to telecommunications and electrical engineering. Basic principles of data transmission may also be covered within the computer science or computer engineering topic of data communications, which also includes computer networking applications and communication protocols, for example routing, switching and inter-process communication. Although the Transmission Control Protocol (TCP) involves transmission, TCP and other transport layer protocols are covered in computer networking but not discussed in a textbook or course about data transmission. In most textbooks, the term analog transmission only refers to the transmission of an analog message signal (without digitization) by means of an analog signal, either as a non-modulated baseband signal or as a passband signal using an analog modulation method such as AM or FM. It may also include analog-over-analog pulse modulated baseband signals such as pulse-width modulation. In a few books within the computer networking tradition, analog transmission also refers to passband transmission of bit-streams using digital modulation methods such as FSK, PSK and ASK. The theoretical aspects of data transmission are covered by information theory and coding theory. Protocol layers and sub-topics Courses and textbooks in the field of data transmission typically deal with the following OSI model protocol layers and topics: Layer 1, the physical layer: Channel coding including Digital modulation schemes Line coding schemes Forward error correction (FEC) codes Bit synchronization Multiplexing Equalization Channel models Layer 2, the data link layer: Channel access schemes, media access control (MAC) Packet mode communication and Frame synchronization Error detection and automatic repeat request (ARQ) Flow control Layer 6, the presentation layer: Source coding (digitization and data compression), and information theory. 
Cryptography (may occur at any layer) It is also common to deal with the cross-layer design of those three layers. Applications and history Data (mainly but not exclusively informational) has been sent via non-electronic (e.g. optical, acoustic, mechanical) means since the advent of communication. Analog signal data has been sent electronically since the advent of the telephone. However, the first electromagnetic data transmission applications in modern times were electrical telegraphy (1809) and teletypewriters (1906), both of which use digital signals. The fundamental theoretical work in data transmission and information theory by Harry Nyquist, Ralph Hartley, Claude Shannon and others during the early 20th century was done with these applications in mind. In the early 1960s, Paul Baran invented distributed adaptive message block switching for digital communication of voice messages using low-cost electronic switches. Donald Davies invented and implemented modern data communication between 1965 and 1967, including packet switching, high-speed routers, communication protocols, hierarchical computer networks and the essence of the end-to-end principle. Baran's work did not include routers with software switches and communication protocols, nor the idea that users, rather than the network itself, would provide the reliability. Both were seminal contributions that influenced the development of computer networks. Data transmission is utilized in computers in computer buses and for communication with peripheral equipment via parallel ports and serial ports such as RS-232 (1969), FireWire (1995) and USB (1996). The principles of data transmission have also been utilized in storage media for error detection and correction since 1951. The first practical method for a receiver to verify that digitally coded data had been received accurately was the Barker code, invented by Ronald Hugh Barker in 1952 and published in 1953. Data transmission is utilized in computer networking equipment such as modems (1940), local area network (LAN) adapters (1964), repeaters, repeater hubs, microwave links, wireless network access points (1997), etc. In telephone networks, digital communication is utilized for transferring many phone calls over the same copper cable or fiber cable by means of pulse-code modulation (PCM) in combination with time-division multiplexing (TDM) (1962). Telephone exchanges have become digital and software controlled, facilitating many value-added services. For example, the first AXE telephone exchange was presented in 1976. Digital communication to the end user using Integrated Services Digital Network (ISDN) services became available in the late 1980s. Since the end of the 1990s, broadband access techniques such as ADSL, cable modems, fiber-to-the-building (FTTB) and fiber-to-the-home (FTTH) have become widespread in small offices and homes. The current tendency is to replace traditional telecommunication services with packet mode communication such as IP telephony and IPTV. Transmitting analog signals digitally allows for greater signal processing capability. The ability to process a communications signal means that errors caused by random processes can be detected and corrected. Digital signals can also be sampled instead of continuously monitored. The multiplexing of multiple digital signals is much simpler than the multiplexing of analog signals.
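The pulse-code modulation and time-division multiplexing mentioned above can be illustrated with a short sketch. This is a minimal, hypothetical example rather than any particular telephone-network implementation: two waveforms are sampled, uniformly quantized to 8-bit code words, and the code words of the two channels are interleaved onto a single stream.

```python
import numpy as np

def pcm_encode(samples, n_bits=8):
    """Uniformly quantize samples in [-1, 1] to unsigned integer code words (simple PCM)."""
    levels = 2 ** n_bits
    clipped = np.clip(samples, -1.0, 1.0)
    return np.round((clipped + 1.0) / 2.0 * (levels - 1)).astype(int)

# Two illustrative "voice" channels, sampled at 8 kHz for 1 millisecond.
t = np.arange(0, 0.001, 1 / 8000)
channel_a = np.sin(2 * np.pi * 440 * t)
channel_b = 0.5 * np.sin(2 * np.pi * 880 * t)

codes_a = pcm_encode(channel_a)
codes_b = pcm_encode(channel_b)

# Time-division multiplexing: alternate one code word from each channel per frame.
tdm_stream = np.empty(codes_a.size + codes_b.size, dtype=int)
tdm_stream[0::2] = codes_a
tdm_stream[1::2] = codes_b

# The receiver demultiplexes by reversing the interleaving.
assert np.array_equal(tdm_stream[0::2], codes_a)
assert np.array_equal(tdm_stream[1::2], codes_b)
print(tdm_stream[:8])
```

Real systems add framing, signalling and companding (for example the A-law or μ-law encoding used in telephony), but the sampling, quantization and interleaving above are the core of the idea.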
Because of all these advantages of digital transmission, the vast demand to transmit computer data, and recent advances in wideband communication channels and solid-state electronics that have allowed engineers to realize these advantages fully, digital communications have grown quickly. The digital revolution has also resulted in many digital telecommunication applications where the principles of data transmission are applied. Examples include second-generation (1991) and later cellular telephony, video conferencing, digital TV (1998), digital radio (1999), and telemetry. The transmitted data may be digital messages originating from a data source, for example a computer or a keyboard. They may also be an analog signal such as a phone call or a video signal, digitized into a bit-stream, for example using pulse-code modulation (PCM) or more advanced source coding (analog-to-digital conversion and data compression) schemes. This source coding and decoding is carried out by codec equipment. Serial and parallel transmission In telecommunications, serial transmission is the sequential transmission of signal elements of a group representing a character or other entity of data. Digital serial transmissions are bits sent sequentially over a single wire, frequency or optical path. Because serial transmission requires less signal processing and offers fewer opportunities for error than parallel transmission, the transfer rate of each individual path may be faster. It can be used over longer distances, and a check digit or parity bit can easily be sent along with the data. Parallel transmission is the simultaneous transmission of related signal elements over two or more separate paths. Multiple electrical wires are used that can transmit multiple bits simultaneously, which allows for higher data transfer rates than can be achieved with serial transmission. This method is typically used internally within the computer, for example in the internal buses, and sometimes externally for such things as printers. Timing skew can be a significant issue in these systems because the wires in parallel data transmission unavoidably have slightly different properties, so some bits may arrive before others, which may corrupt the message.
This issue tends to worsen with distance, making parallel data transmission less reliable for long distances. Communication channels Some communication channel types include: Data transmission circuit Full-duplex Half-duplex Simplex Multi-drop: Bus network Mesh network Ring network Star network Wireless network Point-to-point Asynchronous and synchronous data transmission Asynchronous serial communication uses start and stop bits to signify the beginning and end of a transmission. This method of transmission is used when data are sent intermittently rather than in a solid stream. Synchronous transmission synchronizes transmission speeds at both the receiving and sending ends of the transmission using clock signals. The clock may be a separate signal or embedded in the data. A continual stream of data is then sent between the two nodes. Because there are no start and stop bits, the data transfer can be more efficient.
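As a rough illustration of asynchronous framing, the sketch below frames each data byte with a start bit, an even-parity bit and a stop bit, and the receiver checks the framing and parity when it recovers the byte. It is a simplified model for illustration only, not the exact behaviour of any particular UART or serial standard.

```python
def frame_byte(value: int) -> list[int]:
    """Frame one byte for asynchronous serial transmission:
    start bit (0), eight data bits LSB-first, even-parity bit, stop bit (1)."""
    data_bits = [(value >> i) & 1 for i in range(8)]
    parity = sum(data_bits) % 2  # even parity: total count of 1s, including this bit, is even
    return [0] + data_bits + [parity] + [1]

def unframe_byte(bits: list[int]) -> int:
    """Recover the byte, checking the start/stop bits and the parity bit."""
    if bits[0] != 0 or bits[-1] != 1:
        raise ValueError("framing error: bad start or stop bit")
    data_bits, parity = bits[1:9], bits[9]
    if sum(data_bits) % 2 != parity:
        raise ValueError("parity error: a bit was corrupted in transit")
    return sum(bit << i for i, bit in enumerate(data_bits))

frames = [frame_byte(b) for b in b"OK"]          # each byte becomes an 11-bit frame
received = bytes(unframe_byte(f) for f in frames)
assert received == b"OK"
```

In synchronous transmission there are no per-byte start and stop bits; instead both ends share a clock and rely on periodic frame synchronization patterns, which is why its overhead can be lower.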
Technology
Basics_3
null
42253
https://en.wikipedia.org/wiki/Data%20mining
Data mining
Data mining is the process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. The term "data mining" is a misnomer because the goal is the extraction of patterns and knowledge from large amounts of data, not the extraction (mining) of data itself. It also is a buzzword and is frequently applied to any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics) as well as any application of computer decision support system, including artificial intelligence (e.g., machine learning) and business intelligence. Often the more general terms (large scale) data analysis and analytics—or, when referring to actual methods, artificial intelligence and machine learning—are more appropriate. The actual data mining task is the semi-automatic or automatic analysis of large quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining, sequential pattern mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, although they do belong to the overall KDD process as additional steps. The difference between data analysis and data mining is that data analysis is used to test models and hypotheses on the dataset, e.g., analyzing the effectiveness of a marketing campaign, regardless of the amount of data. In contrast, data mining uses machine learning and statistical models to uncover clandestine or hidden patterns in a large volume of data. The related terms data dredging, data fishing, and data snooping refer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These methods can, however, be used in creating new hypotheses to test against the larger data populations. Etymology In the 1960s, statisticians and economists used terms like data fishing or data dredging to refer to what they considered the bad practice of analyzing data without an a-priori hypothesis. The term "data mining" was used in a similarly critical way by economist Michael Lovell in an article published in the Review of Economic Studies in 1983. 
Lovell indicates that the practice "masquerades under a variety of aliases, ranging from 'experimentation' (positive) to 'fishing' or 'snooping' (negative)". The term data mining appeared around 1990 in the database community, with generally positive connotations. For a short time in the 1980s the phrase "database mining"™ was used, but because it had been trademarked by HNC, a San Diego–based company, to pitch their Database Mining Workstation, researchers turned to "data mining" instead. Other terms used include data archaeology, information harvesting, information discovery, knowledge extraction, etc. Gregory Piatetsky-Shapiro coined the term "knowledge discovery in databases" for the first workshop on the same topic (KDD-1989), and this term became more popular in the AI and machine learning communities. However, the term data mining became more popular in the business and press communities. Currently, the terms data mining and knowledge discovery are used interchangeably. Background The manual extraction of patterns from data has occurred for centuries. Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s). The proliferation, ubiquity and increasing power of computer technology have dramatically increased data collection, storage, and manipulation ability. As data sets have grown in size and complexity, direct "hands-on" data analysis has increasingly been augmented with indirect, automated data processing, aided by other discoveries in computer science, especially in the field of machine learning, such as neural networks, cluster analysis, genetic algorithms (1950s), decision trees and decision rules (1960s), and support vector machines (1990s). Data mining is the process of applying these methods with the intention of uncovering hidden patterns in large data sets. It bridges the gap from applied statistics and artificial intelligence (which usually provide the mathematical background) to database management by exploiting the way data is stored and indexed in databases to execute the actual learning and discovery algorithms more efficiently, allowing such methods to be applied to ever-larger data sets. Process The knowledge discovery in databases (KDD) process is commonly defined with the stages: Selection, Pre-processing, Transformation, Data mining, and Interpretation/evaluation. Many variations on this theme exist, however, such as the Cross-Industry Standard Process for Data Mining (CRISP-DM), which defines six phases: Business understanding, Data understanding, Data preparation, Modeling, Evaluation, and Deployment; or a simplified process such as (1) Pre-processing, (2) Data Mining, and (3) Results Validation. Polls conducted in 2002, 2004, 2007 and 2014 show that the CRISP-DM methodology is the leading methodology used by data miners. The only other data mining standard named in these polls was SEMMA, but 3–4 times as many people reported using CRISP-DM. Several teams of researchers have published reviews of data mining process models, and Azevedo and Santos conducted a comparison of CRISP-DM and SEMMA in 2008. Pre-processing Before data mining algorithms can be used, a target data set must be assembled. As data mining can only uncover patterns actually present in the data, the target data set must be large enough to contain these patterns while remaining concise enough to be mined within an acceptable time limit. A common source for data is a data mart or data warehouse; a minimal sketch of assembling and cleaning such a target set follows.
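The sketch below illustrates assembling a target data set and cleaning it before mining. All column names, values and the plausibility rule are invented purely for illustration; real pipelines depend on the source systems and the mining task at hand.

```python
import pandas as pd

# Hypothetical extract pulled from a data warehouse (made-up columns and values).
raw = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "age":         [34, None, 29, 41, 230],   # one missing value and one implausible outlier
    "basket_size": [3, 5, None, 2, 4],
})

# Select the target data set: only the attributes relevant to the mining task.
target = raw[["age", "basket_size"]]

# Data cleaning: drop records with missing values, then drop obviously noisy ones.
cleaned = target.dropna()
cleaned = cleaned[cleaned["age"].between(0, 120)]

print(cleaned)
```

The same kind of selection and cleaning can of course be done in SQL or inside a dedicated tool; pandas is used here only because it keeps the example short.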
Pre-processing is essential for analyzing multivariate data sets before data mining. The target set is then cleaned. Data cleaning removes the observations containing noise and those with missing data. Data mining Data mining involves six common classes of tasks: Anomaly detection (outlier/change/deviation detection) – The identification of unusual data records that might be interesting, or of data errors that require further investigation because they fall outside the standard range. Association rule learning (dependency modeling) – Searches for relationships between variables. For example, a supermarket might gather data on customer purchasing habits. Using association rule learning, the supermarket can determine which products are frequently bought together and use this information for marketing purposes. This is sometimes referred to as market basket analysis. Clustering – The task of discovering groups and structures in the data that are in some way or another "similar", without using known structures in the data. Classification – The task of generalizing known structure to apply to new data. For example, an e-mail program might attempt to classify an e-mail as "legitimate" or as "spam". Regression – Attempts to find a function that models the data with the least error; that is, it estimates the relationships among data or datasets. Summarization – Providing a more compact representation of the data set, including visualization and report generation. Results validation Data mining can unintentionally be misused, producing results that appear to be significant but which do not actually predict future behavior and cannot be reproduced on a new sample of data, and which are therefore of little use. This is sometimes caused by investigating too many hypotheses and not performing proper statistical hypothesis testing. A simple version of this problem in machine learning is known as overfitting, but the same problem can arise at different phases of the process, and thus a train/test split—when applicable at all—may not be sufficient to prevent it. The final step of knowledge discovery from data is to verify that the patterns produced by the data mining algorithms occur in the wider data set. Not all patterns found by the algorithms are necessarily valid. It is common for data mining algorithms to find patterns in the training set which are not present in the general data set. This is called overfitting. To overcome this, the evaluation uses a test set of data on which the data mining algorithm was not trained. The learned patterns are applied to this test set, and the resulting output is compared to the desired output. For example, a data mining algorithm trying to distinguish "spam" from "legitimate" e-mails would be trained on a training set of sample e-mails. Once trained, the learned patterns would be applied to the test set of e-mails on which it had not been trained. The accuracy of the patterns can then be measured from how many e-mails they correctly classify. Several statistical methods may be used to evaluate the algorithm, such as ROC curves. If the learned patterns do not meet the desired standards, it is necessary to re-evaluate and change the pre-processing and data mining steps. If the learned patterns do meet the desired standards, then the final step is to interpret the learned patterns and turn them into knowledge.
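As a minimal sketch of this train/test evaluation, the example below fits a classifier on synthetic data and scores it on a held-out test set with the area under the ROC curve. The data set, model choice and library calls are illustrative assumptions, not a recommendation of any particular tool.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a "spam vs. legitimate" data set (purely illustrative).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out a test set that the mining algorithm never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate the learned patterns on the unseen data, here with the ROC AUC.
train_auc = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"train AUC = {train_auc:.3f}, test AUC = {test_auc:.3f}")
# A large gap between the two scores is a symptom of overfitting.
```

If the test-set score falls short of the desired standard, the pre-processing and mining steps are revisited, exactly as described above.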
Research The premier professional body in the field is the Association for Computing Machinery's (ACM) Special Interest Group (SIG) on Knowledge Discovery and Data Mining (SIGKDD). Since 1989, this ACM SIG has hosted an annual international conference and published its proceedings, and since 1999 it has published a biannual academic journal titled "SIGKDD Explorations". Computer science conferences on data mining include: CIKM Conference – ACM Conference on Information and Knowledge Management European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases KDD Conference – ACM SIGKDD Conference on Knowledge Discovery and Data Mining Data mining topics are also present in many data management/database conferences such as the ICDE Conference, SIGMOD Conference and International Conference on Very Large Data Bases. Standards There have been some efforts to define standards for the data mining process, for example, the 1999 European Cross Industry Standard Process for Data Mining (CRISP-DM 1.0) and the 2004 Java Data Mining standard (JDM 1.0). Development on successors to these processes (CRISP-DM 2.0 and JDM 2.0) was active in 2006 but has stalled since. JDM 2.0 was withdrawn without reaching a final draft. For exchanging the extracted models—in particular for use in predictive analytics—the key standard is the Predictive Model Markup Language (PMML), which is an XML-based language developed by the Data Mining Group (DMG) and supported as exchange format by many data mining applications. As the name suggests, it only covers prediction models, a particular data mining task of high importance to business applications. However, extensions to cover (for example) subspace clustering have been proposed independently of the DMG. Notable uses Data mining is used wherever there is digital data available. Notable examples of data mining can be found throughout business, medicine, science, finance, construction, and surveillance. Privacy concerns and ethics While the term "data mining" itself may have no ethical implications, it is often associated with the mining of information in relation to user behavior (ethical and otherwise). The ways in which data mining can be used can in some cases and contexts raise questions regarding privacy, legality, and ethics. In particular, data mining government or commercial data sets for national security or law enforcement purposes, such as in the Total Information Awareness Program or in ADVISE, has raised privacy concerns. Data mining requires data preparation which uncovers information or patterns which compromise confidentiality and privacy obligations. A common way for this to occur is through data aggregation. Data aggregation involves combining data together (possibly from various sources) in a way that facilitates analysis (but that also might make identification of private, individual-level data deducible or otherwise apparent). This is not data mining per se, but a result of the preparation of data before—and for the purposes of—the analysis. The threat to an individual's privacy comes into play when the data, once compiled, cause the data miner, or anyone who has access to the newly compiled data set, to be able to identify specific individuals, especially when the data were originally anonymous. It is recommended to be aware of the following before data are collected: The purpose of the data collection and any (known) data mining projects. How the data will be used. 
Who will be able to mine the data and use the data and their derivatives. The status of security surrounding access to the data. How collected data can be updated. Data may also be modified so as to become anonymous, so that individuals may not readily be identified. However, even "anonymized" data sets can potentially contain enough information to allow identification of individuals, as occurred when journalists were able to find several individuals based on a set of search histories that were inadvertently released by AOL. The inadvertent revelation of personally identifiable information by the provider violates Fair Information Practices. This indiscretion can cause financial, emotional, or bodily harm to the indicated individual. In one instance of privacy violation, the patrons of Walgreens filed a lawsuit against the company in 2011 for selling prescription information to data mining companies, who in turn provided the data to pharmaceutical companies. Situation in Europe Europe has rather strong privacy laws, and efforts are underway to further strengthen the rights of consumers. However, the U.S.–E.U. Safe Harbor Principles, developed between 1998 and 2000, currently effectively expose European users to privacy exploitation by U.S. companies. As a consequence of Edward Snowden's global surveillance disclosure, there has been increased discussion of revoking this agreement, in particular because the data would be fully exposed to the National Security Agency, and attempts to reach an agreement with the United States have failed. In the United Kingdom in particular there have been cases of corporations using data mining as a way to target certain groups of customers, forcing them to pay unfairly high prices. These groups tend to be people of lower socio-economic status who are not savvy to the ways they can be exploited in digital marketplaces. Situation in the United States In the United States, privacy concerns have been addressed by the US Congress via the passage of regulatory controls such as the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires individuals to give their "informed consent" regarding information they provide and its intended present and future uses. According to an article in Biotech Business Week, "'[i]n practice, HIPAA may not offer any greater protection than the longstanding regulations in the research arena,' says the AAHC. More importantly, the rule's goal of protection through informed consent is approaching a level of incomprehensibility to average individuals." This underscores the necessity for data anonymity in data aggregation and mining practices. U.S. information privacy legislation such as HIPAA and the Family Educational Rights and Privacy Act (FERPA) applies only to the specific areas that each such law addresses. The use of data mining by the majority of businesses in the U.S. is not controlled by any legislation. Copyright law Situation in Europe Under European copyright and database laws, the mining of in-copyright works (such as by web mining) without the permission of the copyright owner is not legal. Where a database is pure data in Europe, it may be that there is no copyright—but database rights may exist, so data mining becomes subject to intellectual property owners' rights that are protected by the Database Directive. On the recommendation of the Hargreaves review, the UK government amended its copyright law in 2014 to allow content mining as a limitation and exception.
The UK was the second country in the world to do so, after Japan, which introduced an exception in 2009 for data mining. However, due to the restriction of the Information Society Directive (2001), the UK exception only allows content mining for non-commercial purposes. UK copyright law also does not allow this provision to be overridden by contractual terms and conditions. Since 2020, Switzerland has also regulated data mining, allowing it in the research field under certain conditions laid down in Article 24d of the Swiss Copyright Act; this new article entered into force on 1 April 2020. The European Commission facilitated stakeholder discussion on text and data mining in 2013, under the title of Licences for Europe. The focus on licensing, rather than limitations and exceptions, as the solution to this legal issue led representatives of universities, researchers, libraries, civil society groups and open access publishers to leave the stakeholder dialogue in May 2013. Situation in the United States US copyright law, and in particular its provision for fair use, upholds the legality of content mining in America, as do the laws of other fair-use countries such as Israel, Taiwan and South Korea. As content mining is transformative, that is, it does not supplant the original work, it is viewed as lawful under fair use. For example, as part of the Google Book settlement the presiding judge on the case ruled that Google's digitization project of in-copyright books was lawful, in part because of the transformative uses that the digitization project displayed—one being text and data mining. Software Free open-source data mining software and applications The following applications are available under free/open-source licenses. Public access to application source code is also available. Carrot2: Text and search results clustering framework. Chemicalize.org: A chemical structure miner and web search engine. ELKI: A university research project with advanced cluster analysis and outlier detection methods written in the Java language. GATE: A natural language processing and language engineering tool. KNIME: The Konstanz Information Miner, a user-friendly and comprehensive data analytics framework. Massive Online Analysis (MOA): A real-time big data stream mining tool with concept drift, in the Java programming language. MEPX: A cross-platform tool for regression and classification problems based on a Genetic Programming variant. mlpack: A collection of ready-to-use machine learning algorithms written in the C++ language. NLTK (Natural Language Toolkit): A suite of libraries and programs for symbolic and statistical natural language processing (NLP) for the Python language. OpenNN: Open neural networks library. Orange: A component-based data mining and machine learning software suite written in the Python language. PSPP: Data mining and statistics software under the GNU Project, similar to SPSS. R: A programming language and software environment for statistical computing, data mining, and graphics. It is part of the GNU Project. scikit-learn: An open-source machine learning library for the Python programming language. Torch: An open-source deep learning library for the Lua programming language and scientific computing framework with wide support for machine learning algorithms. UIMA: The UIMA (Unstructured Information Management Architecture) is a component framework for analyzing unstructured content such as text, audio and video – originally developed by IBM.
Weka: A suite of machine learning software applications written in the Java programming language. Proprietary data-mining software and applications The following applications are available under proprietary licenses. Angoss KnowledgeSTUDIO: data mining tool LIONsolver: an integrated software application for data mining, business intelligence, and modeling that implements the Learning and Intelligent OptimizatioN (LION) approach. PolyAnalyst: data and text mining software by Megaputer Intelligence. Microsoft Analysis Services: data mining software provided by Microsoft. NetOwl: suite of multilingual text and entity analytics products that enable data mining. Oracle Data Mining: data mining software by Oracle Corporation. PSeven: platform for automation of engineering simulation and analysis, multidisciplinary optimization and data mining provided by DATADVANCE. Qlucore Omics Explorer: data mining software. RapidMiner: An environment for machine learning and data mining experiments. SAS Enterprise Miner: data mining software provided by the SAS Institute. SPSS Modeler: data mining software provided by IBM. STATISTICA Data Miner: data mining software provided by StatSoft. Tanagra: Visualisation-oriented data mining software, also for teaching. Vertica: data mining software provided by Hewlett-Packard. Google Cloud Platform: automated custom ML models managed by Google. Amazon SageMaker: managed service provided by Amazon for creating & productionising custom ML models.
Technology
Computer software
null
42261
https://en.wikipedia.org/wiki/Irrigation
Irrigation
Irrigation (also referred to as watering of plants) is the practice of applying controlled amounts of water to land to help grow crops, landscape plants, and lawns. Irrigation has been a key aspect of agriculture for over 5,000 years and has been developed by many cultures around the world. Irrigation helps to grow crops, maintain landscapes, and revegetate disturbed soils in dry areas and during times of below-average rainfall. In addition to these uses, irrigation is also employed to protect crops from frost, suppress weed growth in grain fields, and prevent soil consolidation. It is also used to cool livestock, reduce dust, dispose of sewage, and support mining operations. Drainage, which involves the removal of surface and sub-surface water from a given location, is often studied in conjunction with irrigation. There are several methods of irrigation that differ in how water is supplied to plants. Surface irrigation, also known as gravity irrigation, is the oldest form of irrigation and has been in use for thousands of years. In sprinkler irrigation, water is piped to one or more central locations within the field and distributed by overhead high-pressure water devices. Micro-irrigation is a system that distributes water under low pressure through a piped network and applies it as a small discharge to each plant. Micro-irrigation uses less pressure and water flow than sprinkler irrigation. Drip irrigation delivers water directly to the root zone of plants. Subirrigation has been used in field crops in areas with high water tables for many years. It involves artificially raising the water table so that the soil is moistened from below the plants' root zone. Irrigation water can come from groundwater (extracted from springs or by using wells), from surface water (withdrawn from rivers, lakes or reservoirs) or from non-conventional sources like treated wastewater, desalinated water, drainage water, or fog collection. Irrigation can be supplementary to rainfall, as is common in many parts of the world, or it can be full irrigation, where crops rarely rely on any contribution from rainfall. Full irrigation is less common and only occurs in arid landscapes with very low rainfall or when crops are grown in semi-arid areas outside of rainy seasons. The environmental effects of irrigation relate to the changes in quantity and quality of soil and water as a result of irrigation and the subsequent effects on natural and social conditions in river basins and downstream of an irrigation scheme. The effects stem from the altered hydrological conditions caused by the installation and operation of the irrigation scheme. Amongst some of these problems is depletion of underground aquifers through overdrafting. Over-irrigation due to poor distribution uniformity or poor management wastes water and chemicals, and may lead to water pollution. Over-irrigation can also cause deep drainage and rising water tables, which can lead to problems of irrigation salinity requiring water table control by some form of subsurface land drainage. Extent In 2000, a total of 2,788,000 km2 (689 million acres) of fertile land worldwide was equipped with irrigation infrastructure. About 68% of this area is in Asia, 17% in the Americas, 9% in Europe, 5% in Africa and 1% in Oceania.
The largest contiguous areas of high irrigation density are found in northern and eastern India and Pakistan along the Ganges and Indus rivers; in the Hai He, Huang He and Yangtze basins in China; along the Nile river in Egypt and Sudan; and in the Mississippi-Missouri river basin, the Southern Great Plains, and parts of California in the United States. Smaller irrigation areas are spread across almost all populated parts of the world. By 2012, the area of irrigated land had increased to an estimated total of 3,242,917 km2 (801 million acres), which is nearly the size of India. The irrigation of 20% of farming land accounts for 40% of food production. Global overview The scale of irrigation increased dramatically over the 20th century. In 1800, 8 million hectares globally were irrigated, in 1950, 94 million hectares, and in 1990, 235 million hectares. By 1990, 30% of global food production came from irrigated land. Irrigation techniques across the globe include canals redirecting surface water, groundwater pumping, and diverting water from dams. National governments lead most irrigation schemes within their borders, but private investors and other nations, especially the United States, China, and European countries like the United Kingdom, also fund and organize some schemes within other nations. By 2021 the global land area equipped for irrigation reached 352 million ha, an increase of 22% from the 289 million ha of 2000 and more than twice the 1960s land area equipped for irrigation. The vast majority is located in Asia (70%), where irrigation was a key component of the green revolution; the Americas account for 16% and Europe for 8% of the world total. India (76 million ha) and China (75 million ha) have the largest equipped area for irrigation, far ahead of the United States of America (27 million ha). China and India also have the largest net gains in equipped area between 2000 and 2020 (+21 million ha for China and +15 million ha for India). All the regions saw increases in the area equipped for irrigation, with Africa growing the fastest (+29%), followed by Asia (+25%), Oceania (+24%), the Americas (+19%) and Europe (+2%). Irrigation enables the production of more crops, especially commodity crops, in areas which otherwise could not support them. Countries frequently invested in irrigation to increase wheat, rice, or cotton production, often with the overarching goal of increasing self-sufficiency. Example values for crops Water sources Groundwater and surface water Irrigation water can come from groundwater (extracted from springs or by using wells), from surface water (withdrawn from rivers, lakes or reservoirs) or from non-conventional sources like treated wastewater, desalinated water, drainage water, or fog collection. While floodwater harvesting belongs to the accepted irrigation methods, rainwater harvesting is usually not considered a form of irrigation. Rainwater harvesting is the collection of runoff water from roofs or unused land and the concentration of this water. Treated or untreated wastewater Other sources Irrigation water can also come from non-conventional sources like treated wastewater, desalinated water, drainage water, or fog collection. In countries where humid air sweeps through at night, water can be obtained by condensation onto cold surfaces. This is practiced in the vineyards at Lanzarote using stones to condense water. Fog collectors are also made of canvas or foil sheets.
Using condensate from air conditioning units as a water source is also becoming more popular in large urban areas. A Glasgow-based startup has helped a farmer in Scotland to establish edible saltmarsh crops irrigated with sea water. An acre of previously marginal land has been put under cultivation to grow samphire, sea blite, and sea aster; these plants yield a higher profit than potatoes. The land is flood irrigated twice a day to simulate tidal flooding; the water is pumped from the sea using wind power. Additional benefits are soil remediation and carbon sequestration. Competition for water resources Until the 1960s, there were fewer than half as many people on the planet as there were in 2024. People were not as wealthy as today, consumed fewer calories and ate less meat, so less water was needed to produce their food. They required a third of the volume of water humans presently take from rivers. Today, the competition for water resources is much more intense, because there are now more than seven billion people on the planet, which increases the likelihood of overconsumption of food produced by water-thirsty animal agriculture and intensive farming practices. This creates increasing competition for water from industry, urbanisation and biofuel crops. Farmers will have to strive to increase productivity to meet growing demands for food, while industry and cities find ways to use water more efficiently. Successful agriculture is dependent upon farmers having sufficient access to water. However, water scarcity is already a critical constraint to farming in many parts of the world. Irrigation methods There are several methods of irrigation. They vary in how the water is supplied to the plants. The goal is to apply the water to the plants as uniformly as possible, so that each plant has the amount of water it needs, neither too much nor too little. Irrigation can also be classified by whether it is supplementary to rainfall, as happens in many parts of the world, or whether it is 'full irrigation', whereby crops rarely depend on any contribution from rainfall. Full irrigation is less common and only happens in arid landscapes experiencing very low rainfall or when crops are grown in semi-arid areas outside of any rainy seasons. Surface irrigation Surface irrigation, also known as gravity irrigation, is the oldest form of irrigation and has been in use for thousands of years. In surface (furrow, flood, or level basin) irrigation systems, water moves across the surface of agricultural lands in order to wet them and infiltrate into the soil. Water moves by following gravity or the slope of the land. Surface irrigation can be subdivided into furrow, border strip or basin irrigation. It is often called flood irrigation when the irrigation results in flooding or near flooding of the cultivated land. Historically, surface irrigation has been the most common method of irrigating agricultural land across most parts of the world. The water application efficiency of surface irrigation is typically lower than that of other forms of irrigation, due in part to the lack of control of applied depths. Surface irrigation involves a significantly lower capital cost and energy requirement than pressurised irrigation systems. Hence it is often the irrigation choice for developing nations, for low value crops and for large fields. Where water levels from the irrigation source permit, the levels are controlled by dikes (levees), usually plugged by soil.
This is often seen in terraced rice fields (rice paddies), where the method is used to flood or control the level of water in each distinct field. In some cases, the water is pumped, or lifted by human or animal power to the level of the land. Surface irrigation is even used to water urban gardens in certain areas, for example, in and around Phoenix, Arizona. The irrigated area is surrounded by a berm and the water is delivered according to a schedule set by a local irrigation district. A special form of irrigation using surface water is spate irrigation, also called floodwater harvesting. In case of a flood (spate), water is diverted to normally dry river beds (wadis) using a network of dams, gates and channels and spread over large areas. The moisture stored in the soil will be used thereafter to grow crops. Spate irrigation areas are in particular located in semi-arid or arid, mountainous regions. Micro-irrigation Micro-irrigation, sometimes called localized irrigation, low volume irrigation, or trickle irrigation is a system where water is distributed under low pressure through a piped network, in a pre-determined pattern, and applied as a small discharge to each plant or adjacent to it. Traditional drip irrigation use individual emitters, subsurface drip irrigation (SDI), micro-spray or micro-sprinklers, and mini-bubbler irrigation all belong to this category of irrigation methods. Drip irrigation Drip irrigation, also known as microirrigation or trickle irrigation, functions as its name suggests. In this system, water is delivered at or near the root zone of plants, one drop at a time. This method can be the most water-efficient method of irrigation, if managed properly; evaporation and runoff are minimized. The field water efficiency of drip irrigation is typically in the range of 80 to 90% when managed correctly. In modern agriculture, drip irrigation is often combined with plastic mulch, further reducing evaporation, and is also the means of delivery of fertilizer. The process is known as fertigation. Deep percolation, where water moves below the root zone, can occur if a drip system is operated for too long or if the delivery rate is too high. Drip irrigation methods range from very high-tech and computerized to low-tech and labor-intensive. Lower water pressures are usually needed than for most other types of systems, with the exception of low-energy center pivot systems and surface irrigation systems, and the system can be designed for uniformity throughout a field or for precise water delivery to individual plants in a landscape containing a mix of plant species. Although it is difficult to regulate pressure on steep slopes, pressure compensating emitters are available, so the field does not have to be level. High-tech solutions involve precisely calibrated emitters located along lines of tubing that extend from a computerized set of valves. Sprinkler irrigation In sprinkler or overhead irrigation, water is piped to one or more central locations within the field and distributed by overhead high-pressure sprinklers or guns. A system using sprinklers, sprays, or guns mounted overhead on permanently installed risers is often referred to as a solid-set irrigation system. Higher pressure sprinklers that rotate are called rotors and are driven by a ball drive, gear drive, or impact mechanism. Rotors can be designed to rotate in a full or partial circle. 
Guns are similar to rotors, except that they generally operate at very high pressures of 275 to 900 kPa (40 to 130 psi) and flows of 3 to 76 L/s (50 to 1200 US gal/min), usually with nozzle diameters in the range of 10 to 50 mm (0.5 to 1.9 in). Guns are used not only for irrigation, but also for industrial applications such as dust suppression and logging. Sprinklers can also be mounted on moving platforms connected to the water source by a hose. Automatically moving wheeled systems known as traveling sprinklers may irrigate areas such as small farms, sports fields, parks, pastures, and cemeteries unattended. Most of these use a length of polyethylene tubing wound on a steel drum. As the tubing is wound back onto the drum, powered by the irrigation water or a small gas engine, the sprinkler is pulled across the field. When the sprinkler arrives back at the reel, the system shuts off. This type of system is known to most people as a "waterreel" traveling irrigation sprinkler, and such systems are used extensively for dust suppression, irrigation, and land application of waste water. Other travelers use a flat rubber hose that is dragged along behind while the sprinkler platform is pulled by a cable. Center pivot Center pivot irrigation is a form of sprinkler irrigation utilising several segments of pipe (usually galvanized steel or aluminium) joined and supported by trusses, mounted on wheeled towers with sprinklers positioned along its length. The system moves in a circular pattern and is fed with water from the pivot point at the center of the arc. These systems are found and used in all parts of the world and allow irrigation of all types of terrain. Newer systems have drop sprinkler heads: most center pivot systems now have drops hanging from a U-shaped pipe attached at the top of the main pipe, with sprinkler heads positioned a few feet (at most) above the crop, thus limiting evaporative losses. Drops can also be used with drag hoses or bubblers that deposit the water directly on the ground between crops. Crops are often planted in a circle to conform to the center pivot. This type of system is known as LEPA (Low Energy Precision Application). Originally, most center pivots were water-powered. These were replaced by hydraulic systems (T-L Irrigation) and electric-motor-driven systems (Reinke, Valley, Zimmatic). Many modern pivots feature GPS devices. Irrigation by lateral move (side roll, wheel line, wheelmove) A series of pipes, each with a wheel of about 1.5 m diameter permanently affixed to its midpoint, and sprinklers along its length, are coupled together. Water is supplied at one end using a large hose. After sufficient irrigation has been applied to one strip of the field, the hose is removed, the water drained from the system, and the assembly rolled either by hand or with a purpose-built mechanism, so that the sprinklers are moved to a different position across the field. The hose is reconnected. The process is repeated in a pattern until the whole field has been irrigated. This system is less expensive to install than a center pivot, but much more labor-intensive to operate – it does not travel automatically across the field: it applies water in a stationary strip, must be drained, and then rolled to a new strip. Most systems use 100 or 130 mm (4 or 5 inch) diameter aluminum pipe. The pipe doubles both as water transport and as an axle for rotating all the wheels.
A drive system (often found near the centre of the wheel line) rotates the clamped-together pipe sections as a single axle, rolling the whole wheel line. Manual adjustment of individual wheel positions may be necessary if the system becomes misaligned. Wheel line systems are limited in the amount of water they can carry, and limited in the height of crops that can be irrigated. One useful feature of a lateral move system is that it consists of sections that can be easily disconnected, adapting to field shape as the line is moved. They are most often used for small, rectilinear, or oddly-shaped fields, hilly or mountainous regions, or in regions where labor is inexpensive. Lawn sprinkler systems A lawn sprinkler system is permanently installed, as opposed to a hose-end sprinkler, which is portable. Sprinkler systems are installed in residential lawns, in commercial landscapes, for churches and schools, in public parks and cemeteries, and on golf courses. Most of the components of these irrigation systems are hidden under ground, since aesthetics are important in a landscape. A typical lawn sprinkler system will consist of one or more zones, limited in size by the capacity of the water source. Each zone will cover a designated portion of the landscape. Sections of the landscape will usually be divided by microclimate, type of plant material, and type of irrigation equipment. A landscape irrigation system may also include zones containing drip irrigation, bubblers, or other types of equipment besides sprinklers. Although manual systems are still used, most lawn sprinkler systems may be operated automatically using an irrigation controller, sometimes called a clock or timer. Most automatic systems employ electric solenoid valves. Each zone has one or more of these valves that are wired to the controller. When the controller sends power to the valve, the valve opens, allowing water to flow to the sprinklers in that zone. There are two main types of sprinklers used in lawn irrigation, pop-up spray heads and rotors. Spray heads have a fixed spray pattern, while rotors have one or more streams that rotate. Spray heads are used to cover smaller areas, while rotors are used for larger areas. Golf course rotors are sometimes so large that a single sprinkler is combined with a valve and called a 'valve in head'. When used in a turf area, the sprinklers are installed with the top of the head flush with the ground surface. When the system is pressurized, the head will pop up out of the ground and water the desired area until the valve closes and shuts off that zone. Once there is no more pressure in the lateral line, the sprinkler head will retract back into the ground. In flower beds or shrub areas, sprinklers may be mounted on above ground risers or even taller pop-up sprinklers may be used and installed flush as in a lawn area. Hose-end sprinklers Hose-end sprinklers are devices attached to the end of a garden hose, used for watering lawns, gardens, or plants. They come in a variety of designs and styles, allowing you to adjust the water flow, pattern, and range for efficient irrigation. Some common types of hose-end sprinklers include: Oscillating Sprinklers: These spray water back and forth in a rectangular or square pattern. They are good for covering large, flat areas evenly. Impact (or Pulsating) Sprinklers: These create a rotating, pulsating spray, which can cover a circular or semi-circular area. They are useful for watering large lawns. 
Stationary Sprinklers: These have a fixed spray pattern and are best for smaller areas or gardens. Rotary Sprinklers: These use spinning arms to distribute water in a circular or semi-circular pattern. Traveling Sprinklers: These move along the hose path on their own, watering as they go, ideal for covering long, narrow spaces. Each type offers different advantages based on garden size and shape, water pressure, and specific watering needs. Subirrigation Subirrigation has been used for many years in field crops in areas with high water tables. It is a method of artificially raising the water table to allow the soil to be moistened from below the plants' root zone. Often those systems are located on permanent grasslands in lowlands or river valleys and combined with drainage infrastructure. A system of pumping stations, canals, weirs and gates allows the water level in a network of ditches to be raised or lowered and thereby controls the water table. Subirrigation is also used in commercial greenhouse production, usually for potted plants. Water is delivered from below, absorbed upwards, and the excess collected for recycling. Typically, a solution of water and nutrients floods a container or flows through a trough for a short period of time, 10–20 minutes, and is then pumped back into a holding tank for reuse. Sub-irrigation in greenhouses requires fairly sophisticated, expensive equipment and management. Advantages are water and nutrient conservation, and labor savings through reduced system maintenance and automation. It is similar in principle and action to subsurface basin irrigation. Another type of subirrigation is the self-watering container, also known as a sub-irrigated planter. This consists of a planter suspended over a reservoir with some type of wicking material such as a polyester rope. The water is drawn up the wick through capillary action. A similar technique is the wicking bed; this too uses capillary action. Efficiency Modern irrigation methods are efficient enough to supply the entire field uniformly with water, so that each plant has the amount of water it needs, neither too much nor too little. Water use efficiency in the field can be determined as follows: Field Water Efficiency (%) = (Water Transpired by Crop ÷ Water Applied to Field) × 100. Increased irrigation efficiency has a number of positive outcomes for the farmer, the community and the wider environment. Low application efficiency implies that the amount of water applied to the field is in excess of the crop or field requirements. Increasing the application efficiency means that the amount of crop produced per unit of water increases. Improved efficiency may be achieved either by applying less water to an existing field or by using water more wisely, thereby achieving higher yields from the same area of land. In some parts of the world, farmers are charged for irrigation water; hence over-application has a direct financial cost to the farmer. Irrigation often requires pumping energy (either electricity or fossil fuel) to deliver water to the field or supply the correct operating pressure. Hence increased efficiency will reduce both the water cost and energy cost per unit of agricultural production. A reduction of water use on one field may mean that the farmer is able to irrigate a larger area of land, increasing total agricultural production.
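The field water efficiency formula above can be applied directly. The sketch below just shows the arithmetic; the seasonal water depths used are invented for illustration.

```python
def field_water_efficiency(water_transpired_mm: float, water_applied_mm: float) -> float:
    """Field water efficiency (%) = water transpired by the crop / water applied to the field * 100."""
    return water_transpired_mm / water_applied_mm * 100.0

# Hypothetical season totals, expressed as millimetres of water depth over the field.
applied = 650.0
transpired = 480.0
print(f"Field water efficiency: {field_water_efficiency(transpired, applied):.1f}%")
# Prints roughly 73.8%; raising this figure means more crop water use per unit of water applied.
```

In practice the transpired volume is estimated from crop evapotranspiration models or field measurements rather than known exactly, so the resulting figure is itself an estimate.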
Low efficiency usually means that excess water is lost through seepage or runoff, both of which can result in loss of crop nutrients or pesticides with potential adverse impacts on the surrounding environment. Improving the efficiency of irrigation is usually achieved in one of two ways, either by improving the system design or by optimising the irrigation management. Improving system design includes conversion from one form of irrigation to another (e.g. from furrow to drip irrigation) and also through small changes in the current system (for example changing flowrates and operating pressures). Irrigation management refers to the scheduling of irrigation events and decisions around how much water is applied. Challenges Environmental impacts Negative impacts frequently accompany extensive irrigation. Some projects which diverted surface water for irrigation dried up the water sources, which led to a more extreme regional climate. Projects that relied on groundwater and pumped too much from underground aquifers created subsidence and salinization. Salinization of irrigation water in turn damaged the crops and seeped into drinking water. Pests and pathogens also thrived in the irrigation canals or ponds full of still water, which created regional outbreaks of diseases like malaria and schistosomiasis. Governments also used irrigation schemes to encourage migration, especially of more desirable populations into an area. Additionally, some of these large nationwide schemes failed to pay off at all, costing more than any benefit gained from increased crop yields. Overdrafting (depletion) of underground aquifers: In the mid-20th century, the advent of diesel and electric motors led to systems that could pump groundwater out of major aquifers faster than drainage basins could refill them. This can lead to permanent loss of aquifer capacity, decreased water quality, ground subsidence, and other problems. The future of food production in such areas as the North China Plain, the Punjab region in India and Pakistan, and the Great Plains of the US is threatened by this phenomenon. Technical challenges Irrigation schemes involve solving numerous engineering and economic problems while minimizing negative environmental consequences. Such problems include: Ground subsidence (e.g. New Orleans, Louisiana) Underirrigation or irrigation giving only just enough water for the plant (e.g. in drip line irrigation) gives poor soil salinity control which leads to increased soil salinity with consequent buildup of toxic salts on soil surface in areas with high evaporation. This requires either leaching to remove these salts and a method of drainage to carry the salts away. When using drip lines, the leaching is best done regularly at certain intervals (with only a slight excess of water), so that the salt is flushed back under the plant's roots. Overirrigation because of poor distribution uniformity or management wastes water, chemicals, and may lead to water pollution. Deep drainage (from over-irrigation) may result in rising water tables which in some instances will lead to problems of irrigation salinity requiring watertable control by some form of subsurface land drainage. For example in Australia, over-abstraction of fresh water for intensive irrigation activities has caused 33% of the land area to be at risk of salination. Drainage front instability, also known as viscous fingering, where an unstable drainage front results in a pattern of fingers and viscous entrapped saturated zones. 
Irrigation with saline or high-sodium water may damage soil structure owing to the formation of alkaline soil. Clogging of filters: algae can clog filters, drip installations, and nozzles. Chlorination, algaecide, UV and ultrasonic methods can be used for algae control in irrigation systems. Complications in accurately measuring irrigation performance, which changes over time and space, using measures such as productivity, efficiency, equity and adequacy. Macro-irrigation, typical of intensive agriculture in which agrochemicals are also used, often causes eutrophication. Social aspects Competition for surface water rights and territory defense. Assisting smallholders in sustainably and collectively managing irrigation technology and changes in technology. History Ancient history Archaeological investigation has found evidence of irrigation in areas lacking sufficient natural rainfall to support crops for rainfed agriculture. Some of the earliest known uses of the technology date to the 6th millennium BCE in Khuzistan in the south-west of Iran. The site of Choga Mami, in present-day Iraq on the border with Iran, is believed to be the earliest site to show canal irrigation in operation, at about 6000 BCE. Irrigation was used as a means of manipulating water in the alluvial plains of the Indus Valley Civilization, the application of which is estimated to have begun around 4500 BCE and drastically increased the size and prosperity of their agricultural settlements. The Indus Valley Civilization developed sophisticated irrigation and water-storage systems, including artificial reservoirs at Girnar dated to 3000 BCE, and an early canal irrigation system from 2600 BCE. Large-scale agriculture was practiced, with an extensive network of canals used for the purpose of irrigation. Farmers in the Mesopotamian plain used irrigation from at least the third millennium BCE. They developed perennial irrigation, regularly watering crops throughout the growing season by coaxing water through a matrix of small channels formed in the field. Ancient Egyptians practiced basin irrigation using the flooding of the Nile to inundate land plots which had been surrounded by dikes. The flood water remained until the fertile sediment had settled before the engineers returned the surplus to the watercourse. There is evidence of the ancient Egyptian pharaoh Amenemhet III in the twelfth dynasty (about 1800 BCE) using the natural lake of the Faiyum Oasis as a reservoir to store surpluses of water for use during dry seasons. The lake swelled annually from the flooding of the Nile. The Ancient Nubians developed a form of irrigation by using a waterwheel-like device called a sakia. Irrigation began in Nubia between the third and second millennia BCE. It largely depended upon the flood waters that would flow through the Nile River and other rivers in what is now the Sudan. In sub-Saharan Africa, irrigation reached the Niger River region cultures and civilizations by the first or second millennium BCE and was based on wet-season flooding and water harvesting. Evidence of terrace irrigation occurs in pre-Columbian America, early Syria, India, and China. In the Zana Valley of the Andes Mountains in Peru, archaeologists have found remains of three irrigation canals radiocarbon-dated from the 4th millennium BCE, the 3rd millennium BCE and the 9th century CE. These canals provide the earliest record of irrigation in the New World.
Traces of a canal possibly dating from the 5th millennium BCE were found under the 4th-millennium canal. Ancient Persia (modern-day Iran) used irrigation as far back as the 6th millennium BCE to grow barley in areas with insufficient natural rainfall. The qanats, developed in ancient Persia about 800 BCE, are among the oldest known irrigation methods still in use today. They are now found in Asia, the Middle East, and North Africa. The system comprises a network of vertical wells and gently sloping tunnels driven into the sides of cliffs and steep hills to tap groundwater. The noria, a water wheel with clay pots around the rim powered by the flow of the stream (or by animals where the water source was still), first came into use at about this time among Roman settlers in North Africa. By 150 BCE, the pots were fitted with valves to allow smoother filling as they were forced into the water. Sri Lanka The irrigation works of ancient Sri Lanka, the earliest dating from about 300 BCE in the reign of King Pandukabhaya, and under continuous development for the next thousand years, were one of the most complex irrigation systems of the ancient world. In addition to underground canals, the Sinhalese were the first to build completely artificial reservoirs to store water. These reservoirs and canal systems were used primarily to irrigate paddy fields, which require a lot of water to cultivate. Most of these irrigation systems still survive intact in Anuradhapura and Polonnaruwa because of their advanced and precise engineering. The system was extensively restored and further extended during the reign of King Parakrama Bahu (1153–1186 CE). China The oldest known hydraulic engineers of China were Sunshu Ao (6th century BCE) of the Spring and Autumn period and Ximen Bao (5th century BCE) of the Warring States period, both of whom worked on large irrigation projects. In the Sichuan region belonging to the state of Qin of ancient China, the Dujiangyan Irrigation System devised by the Qin Chinese hydrologist and irrigation engineer Li Bing was built in 256 BCE to irrigate a vast area of farmland that today still supplies water. By the 2nd century CE, during the Han dynasty, the Chinese also used chain pumps which lifted water from a lower elevation to a higher one. These were powered manually by foot pedal, by hydraulic waterwheels, or by rotating mechanical wheels pulled by oxen. The water was used for public works, providing water for urban residential quarters and palace gardens, but mostly for irrigation of farmland canals and channels in the fields. Korea In Korea, Jang Yeong-sil, a Korean engineer of the Joseon dynasty, under the active direction of the king, Sejong the Great, invented the world's first rain gauge, the uryanggye, in 1441. It was installed in irrigation tanks as part of a nationwide system to measure and collect rainfall for agricultural applications. With this instrument, planners and farmers could make better use of the information gathered in the survey. North America The earliest agricultural irrigation canal system known in the area of the present-day United States dates to between 1200 BCE and 800 BCE and was discovered by Desert Archaeology, Inc. in Marana, Arizona (adjacent to Tucson) in 2009. The irrigation-canal system predates the Hohokam culture by two thousand years and belongs to an unidentified culture.
In North America, the Hohokam were the only culture known to rely on irrigation canals to water their crops, and their irrigation systems supported the largest population in the Southwest by CE 1300. The Hohokam constructed various simple canals combined with weirs in their various agricultural pursuits. Between the 7th and 14th centuries, they built and maintained extensive irrigation networks along the lower Salt and middle Gila Rivers that rivaled the complexity of those used in the ancient Near East, Egypt, and China. These were constructed using relatively simple excavation tools, without the benefit of advanced engineering technologies, and achieved drops of a few feet per mile, balancing erosion and siltation. The Hohokam cultivated cotton, tobacco, maize, beans, and squash varieties and harvested an assortment of wild plants. Late in the Hohokam Chronological Sequence, they used extensive dry-farming systems, primarily to grow agave for food and fiber. Their reliance on agricultural strategies based on canal irrigation, vital in their less-than-hospitable desert environment and arid climate, provided the basis for the aggregation of rural populations into stable urban centers. South America The oldest known irrigation canals in the Americas are in the desert of northern Peru in the Zaña Valley near the hamlet of Nanchoc. The canals have been radiocarbon dated to at least 3400 BCE and possibly as old as 4700 BCE. The canals at that time irrigated crops such as peanuts, squash, manioc, chenopods (relatives of quinoa) and, later, maize. Modern history The scale of global irrigation increased dramatically over the 20th century. In 1800, 8 million hectares were irrigated; in 1950, 94 million hectares; and in 1990, 235 million hectares. By 1990, 30% of the global food production came from irrigated land. Irrigation techniques across the globe included canals redirecting surface water, groundwater pumping, and diverting water from dams. National governments led most irrigation schemes within their borders, but private investors and other nations, especially the United States, China, and European countries like the United Kingdom, funded and organized some schemes within other nations. Irrigation enabled the production of more crops, especially commodity crops in areas that otherwise could not support them. Countries frequently invested in irrigation to increase wheat, rice, or cotton production, often with the overarching goal of increasing self-sufficiency. In the 20th century, global anxiety, specifically about the American cotton monopoly, fueled many imperial irrigation projects: Britain began developing irrigation in India, the Ottomans in Egypt, the French in Algeria, the Portuguese in Angola, the Germans in Togo, and the Soviets in Central Asia.
American West Irrigated land in the United States increased from 300,000 acres in 1880 to 4.1 million in 1890 to 7.3 million in 1900. Two-thirds of this irrigation came from groundwater or small ponds and reservoirs, while the other one-third came from large dams. One of the main attractions of irrigation in the West was its increased dependability compared to rainfall-watered agriculture in the East. Proponents argued that farmers with a dependable water supply could more easily get loans from bankers interested in this more predictable farming model. Most irrigation in the Great Plains region derived from underground aquifers. Euro-American farmers who colonized the region in the 19th century tried to grow the commodity crops that they were used to, like wheat, corn, and alfalfa, but the region's scant rainfall stifled their growing capacity. Between the late 1800s and the 1930s, farmers used wind-powered pumps to draw groundwater. These windpumps had limited power, but the development of gas-powered pumps in the mid-1930s pushed wells deep into the Ogallala Aquifer. Farmers irrigated fields by laying pipes across the field with sprinklers at intervals, a labor-intensive process, until the advent of the center-pivot sprinkler after WWII, which made irrigation significantly easier. By the 1970s farmers drained the aquifer ten times faster than it could recharge, and by 1993 they had removed half of the accessible water. Large-scale federal funding and intervention pushed through the majority of irrigation projects in the West, especially in California, Colorado, Arizona, and Nevada. At first, plans to increase irrigated farmland, largely by giving land to farmers and asking them to find water, failed across the board. Congress passed the Desert Land Act in 1877 and the Carey Act in 1894, which only marginally increased irrigation. Only in 1902 did Congress pass the National Reclamation Act, which channeled money from the sale of western public lands, in parcels of up to 160 acres, into irrigation projects on public or private land in the arid West. The Congressmen who passed the law and their wealthy supporters backed Western irrigation because it would increase American exports, ‘reclaim' the West, and push the Eastern poor out West for a better life. While the National Reclamation Act was the most successful piece of federal irrigation legislation, the implementation of the act did not go as planned. The Reclamation Service chose to push most of the Act's money toward construction rather than settlement, so the Service overwhelmingly prioritized building large dams like the Hoover Dam. Over the 20th century, Congress and state governments grew more frustrated with the Reclamation Service and the irrigation schemes. Several factors contributed: Frederick Newell, head of the Reclamation Service, proved uncompromising and challenging to work with; crop prices were falling; there was resistance to delaying debt payments; and new projects were refused until the completion of old ones. The Reclamation Extension Act of 1914, which transferred a significant amount of decision-making power over irrigation projects from the Reclamation Service to Congress, was in many ways a result of the Service's increasing political unpopularity.
In the lower Colorado Basin of Arizona, Colorado, and Nevada, the states derive irrigation water largely from rivers, especially the Colorado River, which irrigates more than 4.5 million acres of land, with a less significant amount coming from groundwater. In the 1952 case Arizona v. California, Arizona sued California for increased access to the Colorado River, on the grounds that its groundwater supply could not sustain its almost entirely irrigation-based agricultural economy; Arizona won the case. California, which began irrigating in earnest in the 1870s in the San Joaquin Valley, had passed the Wright Act of 1887, permitting agricultural communities to construct and operate needed irrigation works. The Colorado River also irrigates large fields in California's Imperial Valley, fed by the National Reclamation Act-built All-American Canal. Soviet Central Asia When the Bolsheviks conquered Central Asia in 1917, the native Kazakhs, Uzbeks, and Turkmens used minimal irrigation. The Slavic immigrants pushed into the area by the Tsarist government brought their irrigation methods, including waterwheels, the use of rice paddies to restore salted land, and underground irrigation channels. Russians dismissed these techniques as crude and inefficient. Despite this, tsarist officials maintained these systems through the late 19th century without other solutions. Before conquering the area, the Russian government accepted a 1911 American proposal to send hydraulic experts to Central Asia to investigate the potential for large-scale irrigation. A 1918 decree by Lenin then encouraged irrigation development in the region, which began in the 1930s. When it did, Stalin and other Soviet leaders prioritized large-scale, ambitious hydraulic projects, especially along the Volga River. The Soviet irrigation push stemmed mainly from Russia's late 19th-century fears of the American cotton monopoly and a subsequent desire to achieve cotton self-sufficiency. Russia had built up its textile manufacturing industry in the 19th century, which required more cotton, and hence irrigation, since the region did not receive enough rainfall to support cotton farming. The Russians built dams on the Don and Kuban Rivers for irrigation, removing freshwater flow from the Sea of Azov and making it much saltier. Depletion and salinization scourged other areas of the Russian irrigation project. In the 1950s, Soviet officials also began diverting the Syr Darya and the Amu Darya, which fed the Aral Sea. Before diversion, the rivers delivered of water to the Aral Sea per year, but after, they only delivered . Because of its reduced inflow, the Aral Sea came to cover less than half of its original seabed, which made the regional climate more extreme and created airborne salinization, lowering nearby crop yields. By 1975, the USSR used eight times as much water as it had in 1913, mostly for irrigation. Russia's expansion of irrigation began to decrease in the late 1980s, and irrigated hectares in Central Asia peaked at 7 million. Mikhail Gorbachev cancelled a proposed plan to reverse the Ob and Yenisei rivers for irrigation in 1986, and the breakup of the USSR in 1991 ended Russian investment in Central Asian cotton irrigation. Africa Different irrigation schemes with various goals and success rates were implemented across Africa in the 20th century, all of them influenced by colonial forces. The Tana River Irrigation Scheme in eastern Kenya, completed between 1948 and 1963, opened up new lands for agriculture.
The Kenyan government attempted to resettle the area with detainees from the Mau Mau uprising. Italian oil drillers discovered Libya's underground water resources during the Italian colonization of Libya. This water lay dormant until 1969, when Muammar al-Gaddafi and American Armand Hammer built the Great Man-Made River to deliver the Saharan water to the coast. The water largely contributed to irrigation but cost four to ten times more than the crops it produced were worth. In 1912, the Union of South Africa created an irrigation department and began investing in water storage infrastructure and irrigation. The government used irrigation and dam-building to further social goals like poverty relief by creating construction jobs for poor whites and by creating irrigation schemes to increase white farming. One of their first significant irrigation projects was the Hartbeespoort Dam, begun in 1916 to elevate the living conditions of the ‘poor whites’ in the region and eventually completed as a ‘whites only’ employment opportunity. The Pretoria irrigation scheme, Kammanassie project, and Buchuberg irrigation scheme on the Orange River all followed in the same vein in the 1920s and 30s. In Egypt, modern irrigation began with Muhammad Ali Pasha in the mid-1800s, who sought to achieve Egyptian independence from the Ottomans through increased trade with Europe—specifically cotton exportation. His administration proposed replacing the traditional Nile basin irrigation, which took advantage of the annual ebb and flow of the Nile, with irrigation barrages in the lower Nile, which better suited cotton production. Egypt devoted 105,000 ha to cotton in 1861, which increased fivefold by 1865. Most of their exports were shipped to England, and the United-States-Civil-War-induced cotton scarcity in the 1860s cemented Egypt as England's cotton producer. As the Egyptian economy became more dependent on cotton in the 20th century, controlling even small Nile floods became more important. Cotton production was more at risk of destruction than more common crops like barley or wheat. After the British occupation of Egypt in 1882, the British intensified the conversion to perennial irrigation with the construction of the Delta Barrage, the Assiut Barrage, and the first Aswan Dam. Perennial irrigation decreased local control over water and made traditional subsistence farming or the farming of other crops incredibly difficult, eventually contributing to widespread peasant bankruptcy and the 1879-1882 ‘Urabi revolt. Examples by country Gallery
Technology
Horticultural techniques
null
42405
https://en.wikipedia.org/wiki/Ichthyology
Ichthyology
Ichthyology is the branch of zoology devoted to the study of fish, including bony fish (Osteichthyes), cartilaginous fish (Chondrichthyes), and jawless fish (Agnatha). According to FishBase, 33,400 species of fish had been described as of October 2016, with approximately 250 new species described each year. Etymology The word is derived from the Greek words ἰχθύς, ikhthus, meaning "fish", and λογία, logia, meaning "to study". History The study of fish dates from the Upper Paleolithic Revolution (with the advent of "high culture"). The science of ichthyology was developed in several interconnecting epochs, each with various significant advancements. The study of fish has its origins in humans' desire to feed, clothe, and equip themselves with useful implements. According to Michael Barton, a prominent ichthyologist and professor at Centre College, "the earliest ichthyologists were hunters and gatherers who had learned how to obtain the most useful fish, where to obtain them in abundance, and at what times they might be the most available". Early cultures manifested these insights in abstract and identifiable artistic expressions. 1500 BC–40 AD Informal scientific descriptions of fish are represented within the Judeo-Christian tradition. The Old Testament laws of kashrut forbade the consumption of fish without scales or appendages. Theologians and ichthyologists believe that the apostle Peter and his contemporaries harvested the fish that are today sold in modern industry along the Sea of Galilee, presently known as Lake Kinneret. These fish include cyprinids of the genera Barbus and Mirogrex, cichlids of the genus Sarotherodon, and Mugil cephalus of the family Mugilidae. 335 BC–80 AD Aristotle incorporated ichthyology into formal scientific study. Between 333 and 322 BC, he provided the earliest taxonomic classification of fish, accurately describing 117 species of Mediterranean fish. Furthermore, Aristotle documented anatomical and behavioral differences between fish and marine mammals. After his death, some of his pupils continued his ichthyological research. Theophrastus, for example, composed a treatise on amphibious fish. The Romans, although less devoted to science, wrote extensively about fish. Pliny the Elder, a notable Roman naturalist, compiled the ichthyological works of indigenous Greeks, including verifiable and ambiguous peculiarities such as the sawfish and mermaid, respectively. Pliny's documentation was the last significant contribution to ichthyology until the European Renaissance. European Renaissance The writings of three 16th-century scholars, Hippolito Salviani, Pierre Belon, and Guillaume Rondelet, signify the conception of modern ichthyology. Their investigations were based upon actual observation rather than the recitation of ancient authorities, which both popularized and lent weight to their discoveries. Of the three, Rondelet's De Piscibus Marinis is regarded as the most influential, identifying 244 species of fish. 16th–17th century The incremental alterations in navigation and shipbuilding throughout the Renaissance marked the commencement of a new epoch in ichthyology. The Renaissance culminated with the era of exploration and colonization, and upon the cosmopolitan interest in navigation came the specialization in naturalism. Georg Marcgrave of Saxony composed the Naturalis Brasiliae in 1648. This document contained a description of 100 species of fish indigenous to the Brazilian coastline.
In 1686, John Ray and Francis Willughby collaboratively published Historia Piscium, a scientific manuscript containing 420 species of fish, 178 of these newly discovered. The fish contained within this informative literature were arranged in a provisional system of classification. The classification used within the Historia Piscium was further developed by Carl Linnaeus, the "father of modern taxonomy". His taxonomic approach became the systematic approach to the study of organisms, including fish. Linnaeus was a professor at the University of Uppsala and an eminent botanist; however, one of his colleagues, Peter Artedi, earned the title "father of ichthyology" through his indispensable advancements. Artedi contributed to Linnaeus's refinement of the principles of taxonomy. Furthermore, he recognized five additional orders of fish: Malacopterygii, Acanthopterygii, Branchiostegi, Chondropterygii, and Plagiuri. Artedi developed standard methods for making counts and measurements of anatomical features that are still used today. Another associate of Linnaeus, Albertus Seba, was a prosperous pharmacist from Amsterdam. Seba assembled a cabinet, or collection, of fish. He invited Artedi to use this assortment of fish; however, in 1735 Artedi fell into an Amsterdam canal and drowned at the age of 30. Linnaeus posthumously published Artedi's manuscripts as Ichthyologia, sive Opera Omnia de Piscibus (1738). Linnaeus's refinement of taxonomy culminated in the development of binomial nomenclature, which contemporary ichthyologists still use. Furthermore, he revised the orders introduced by Artedi, placing significance on pelvic fins. Fish lacking this appendage were placed within the order Apodes; fish having abdominal, thoracic, or jugular pelvic fins were termed Abdominales, Thoracici, and Jugulares, respectively. However, these alterations were not grounded within evolutionary theory. It therefore took over a century before Charles Darwin provided the intellectual foundation needed to perceive that the degree of similarity in taxonomic features is a consequence of phylogenetic relationships. Modern era Close to the dawn of the 19th century, Marcus Elieser Bloch of Berlin and Georges Cuvier of Paris made attempts to consolidate the knowledge of ichthyology. Cuvier summarized all of the available information in his monumental Histoire Naturelle des Poissons. This manuscript was published between 1828 and 1849 in a 22-volume series. This document describes 4,514 species of fish, 2,311 of these new to science. It remains one of the most ambitious treatises of the modern world. Scientific exploration of the Americas advanced knowledge of the remarkable diversity of fish. Charles Alexandre Lesueur was a student of Cuvier. He assembled a cabinet of the fish dwelling within the Great Lakes and Saint Lawrence River regions. Adventurous individuals such as John James Audubon and Constantine Samuel Rafinesque figure in the faunal documentation of North America. They often traveled with one another. Rafinesque wrote Ichthyologia Ohiensis in 1820. In addition, Louis Agassiz of Switzerland established his reputation through the study of freshwater fish and the first comprehensive treatment of palaeoichthyology, Poissons Fossiles. In the 1840s, Agassiz moved to the United States, where he taught at Harvard University until his death in 1873. Albert Günther published his Catalogue of the Fishes of the British Museum between 1859 and 1870, describing over 6,800 species and mentioning another 1,700.
Generally considered one of the most influential ichthyologists, David Starr Jordan wrote 650 articles and books on the subject and served as president of Indiana University and Stanford University. Modern publications Organizations Notable ichthyologists Members of this list meet one or more of the following criteria: 1) Author of 50 or more fish taxon names, 2) Author of major reference work in ichthyology, 3) Founder of major journal or museum, 4) Person most notable for other reasons who has also worked in ichthyology. Alexander Emanuel Agassiz Louis Agassiz Emperor Akihito of Japan Gerald R. Allen Peter Artedi Herbert R. Axelrod William O. Ayres, California Spencer Fullerton Baird Tarleton Hoffman Bean Lev Berg, Russia Henry Bryant Bigelow Pieter Bleeker, East Indies Marcus Elieser Bloch George Albert Boulenger Jean Cadenat Pierre Carbonnier Eugenie Clark Leonard Compagno Edward Drinker Cope Georges Cuvier Francis Day, India Francis Buchanan-Hamilton, Scottish Carl H. Eigenmann Rosa Smith Eigenmann William N. Eschmeyer Barton Warren Evermann Henry Weed Fowler Joseph Paul Gaimard Samuel Garman Charles Henry Gilbert Theodore Nicholas Gill Charles Frédéric Girard George Brown Goode Albert Günther Albert William Herre Carl L. Hubbs David Starr Jordan Maurice Kottelat, Swiss Bernard Germain de Lacépède Carl Linnaeus Seth Eugene Meek George S. Myers Joseph S. Nelson, Fishes of the World John Treadwell Nichols, China, founder of Copeia John Roxborough Norman Peter Simon Pallas Wilhelm Peters Felipe Poey Jean René Constant Quoy Constantine Samuel Rafinesque John Ernest Randall Charles Tate Regan John Richardson Raúl Adolfo Ringuelet Eduard Rüppell Johann Gottlob Schneider H.M. Smith J.L.B. Smith Edwin Chapin Starks Franz Steindachner Royal D. Suttkus Frank Talbot Shigeho Tanaka Ethelwynn Trewavas, English Achille Valenciennes Johann Julius Walbaum Gilbert Percy Whitley Francis Willughby Stan Wood William Yarrell Paleoichthyologists Hans C. Bjerring Erik Jarvik Erik Stensiö Non-academic ichthyologists Sakana-kun
Biology and health sciences
Basics_2
Biology
42418
https://en.wikipedia.org/wiki/Electrical%20cable
Electrical cable
An electrical cable is an assembly of one or more wires running side by side or bundled, which is used as an electrical conductor to carry electric current. Electrical cables are used to connect two or more devices, enabling the transfer of electrical signals, power, or both from one device to the other. Physically, an electrical cable is an assembly consisting of one or more conductors with their own insulations and optional screens, individual coverings, assembly protection and protective covering. One or more electrical cables and their corresponding connectors may be formed into a cable assembly, which is not necessarily suitable for connecting two devices but can be a partial product (e.g. to be soldered onto a printed circuit board with a connector mounted to the housing). Cable assemblies can also take the form of a cable tree or cable harness, used to connect many terminals together. Uses Electrical cables are used to connect two or more devices, enabling the transfer of electrical signals or power from one device to the other. Long-distance communication takes place over undersea communication cables. Power cables are used for bulk transmission of alternating and direct current power, especially using high-voltage cable. Electrical cables are extensively used in building wiring for lighting, power and control circuits permanently installed in buildings. Since all the circuit conductors required can be installed in a cable at one time, installation labor is saved compared to certain other wiring methods. Physically, an electrical cable is an assembly consisting of one or more conductors with their own insulations and optional screens, individual coverings, assembly protection and protective coverings. Electrical cables may be made more flexible by stranding the wires. In this process, smaller individual wires are twisted or braided together to produce larger wires that are more flexible than solid wires of similar size. Bunching small wires before concentric stranding adds the most flexibility. Copper wires in a cable may be bare, or they may be plated with a thin layer of another metal, most often tin but sometimes gold, silver or some other material. Tin, gold, and silver are much less prone to oxidation than copper, which may lengthen wire life and make soldering easier. Tinning is also used to provide lubrication between strands. Tinning was used to aid the removal of rubber insulation. Tight lays during stranding make the cable extensible (CBA – as in telephone handset cords). In the 19th century and early 20th century, electrical cable was often insulated using cloth, rubber or paper. Plastic materials are generally used today, except for high-reliability power cables. The first thermoplastic used was gutta-percha (a natural latex), which was found useful for underwater cables in the 19th century. The first, and still very common, man-made plastic used for cable insulation was polyethylene. This was invented in 1930, but not available outside military use until after World War II, during which a telegraph cable using it was laid across the English Channel to support troops following D-Day. Cables can be securely fastened and organized, such as by using trunking, cable trays, cable ties or cable lacing. Continuous-flex or flexible cables used in moving applications within cable carriers can be secured using strain relief devices or cable ties. Characteristics Any current-carrying conductor, including a cable, radiates an electromagnetic field.
Likewise, any conductor or cable will pick up energy from any existing electromagnetic field around it. These effects are often undesirable, in the first case amounting to unwanted transmission of energy which may adversely affect nearby equipment or other parts of the same piece of equipment; and in the second case, unwanted pickup of noise which may mask the desired signal being carried by the cable, or, if the cable is carrying power supply or control voltages, pollute them to such an extent as to cause equipment malfunction. The first solution to these problems is to keep cable lengths in buildings short, since pickup and transmission are essentially proportional to the length of the cable. The second solution is to route cables away from trouble. Beyond this, there are particular cable designs that minimize electromagnetic pickup and transmission. Three of the principal design techniques are shielding, coaxial geometry, and twisted-pair geometry. Shielding makes use of the electrical principle of the Faraday cage. The cable is encased for its entire length in foil or wire mesh. All wires running inside this shielding layer will be to a large extent decoupled from external electrical fields, particularly if the shield is connected to a point of constant voltage, such as earth or ground. Simple shielding of this type is not greatly effective against low-frequency magnetic fields, however, such as magnetic "hum" from a nearby power transformer. A grounded shield on cables operating at 2.5 kV or more gathers leakage current and capacitive current, protecting people from electric shock and equalizing stress on the cable insulation. Coaxial design helps to further reduce low-frequency magnetic transmission and pickup. In this design the foil or mesh shield has a circular cross section and the inner conductor is exactly at its center. This causes the voltages induced by a magnetic field between the shield and the core conductor to consist of two nearly equal magnitudes which cancel each other. A twisted pair has two wires of a cable twisted around each other. This can be demonstrated by putting one end of a pair of wires in a hand drill and turning while maintaining moderate tension on the line. Where the interfering signal has a wavelength that is long compared to the pitch of the twisted pair, alternate lengths of wires develop opposing voltages, tending to cancel the effect of the interference. Fire protection Electrical cable jacket material is usually constructed of flexible plastic, which will burn. The fire hazard of grouped cables can be significant. Cable jacketing materials can be formulated to prevent fire spread. Alternatively, fire spread amongst combustible cables can be prevented by the application of fire retardant coatings directly on the cable exterior, or the fire threat can be isolated by the installation of boxes constructed of noncombustible materials around the bulk cable installation. Types Coaxial cable – used for radio frequency signals, for example in cable television distribution systems.
Direct-buried cable Flexible cables Filled cable Heliax cable Non-metallic sheathed cable (or nonmetallic building wire, NM, NM-B) Armored cable (or BX) Multicore cable (consists of more than one wire and is covered by a cable jacket) Paired cable – Composed of two individually insulated conductors that are usually used in DC or low-frequency AC applications Portable cord – Flexible cable for AC power in portable applications Power cable – A cable used for transmission of power Ribbon cable – Useful when many wires are required. This type of cable can easily flex, and it is designed to handle low-level voltages. Shielded cable – Used for sensitive electronic circuits or to provide protection in high-voltage applications. Single cable (this name is sometimes used for a single wire) Structured cabling Submersible cable Twin and earth Twinax cable Twin-lead – This type of cable is a flat two-wire line. It is commonly called a 300 Ω line because the line has an impedance of 300 Ω. It is often used as a transmission line between an antenna and a receiver (e.g., TV and radio). These cables are stranded to lower skin effects. Twisted pair – Consists of two interwound insulated wires. It resembles a paired cable, except that the paired wires are twisted. CENELEC HD 361 is a ratified standard published by CENELEC, which relates to wire and cable marking types and whose goal is to harmonize cables. Deutsches Institut für Normung (DIN, VDE) has released a similar standard (DIN VDE 0292).
Technology
Components_2
null
42445
https://en.wikipedia.org/wiki/Dalton%20%28unit%29
Dalton (unit)
The dalton or unified atomic mass unit (symbols: Da or u, respectively) is a unit of mass defined as 1/12 of the mass of an unbound neutral atom of carbon-12 in its nuclear and electronic ground state and at rest. It is a non-SI unit accepted for use with SI. The atomic mass constant, denoted mu, is defined identically, giving . This unit is commonly used in physics and chemistry to express the mass of atomic-scale objects, such as atoms, molecules, and elementary particles, both for discrete instances and multiple types of ensemble averages. For example, an atom of helium-4 has a mass of . This is an intrinsic property of the isotope and all helium-4 atoms have the same mass. Acetylsalicylic acid (aspirin), , has an average mass of about . However, there are no acetylsalicylic acid molecules with this mass. The two most common masses of individual acetylsalicylic acid molecules are , having the most common isotopes, and , in which one carbon is carbon-13. The molecular masses of proteins, nucleic acids, and other large polymers are often expressed with the units kilodalton (kDa) and megadalton (MDa). Titin, one of the largest known proteins, has a molecular mass of between 3 and 3.7 megadaltons. The DNA of chromosome 1 in the human genome has about 249 million base pairs, each with an average mass of about , or total. The mole is a unit of amount of substance used in chemistry and physics, such that the mass of one mole of a substance expressed in grams is numerically equal to the average mass of one of its particles expressed in daltons. That is, the molar mass of a chemical compound expressed in g/mol or kg/kmol is numerically equal to its average molecular mass expressed in Da. For example, the average mass of one molecule of water is about 18.0153 Da, and the mass of one mole of water is about 18.0153 g. A protein whose molecule has an average mass of would have a molar mass of . However, while this equality can be assumed for practical purposes, it is only approximate, because of the 2019 redefinition of the mole. In general, the mass in daltons of an atom is numerically close but not exactly equal to the number of nucleons in its nucleus. It follows that the molar mass of a compound (grams per mole) is numerically close to the average number of nucleons contained in each molecule. By definition, the mass of an atom of carbon-12 is 12 daltons, which corresponds with the number of nucleons that it has (6 protons and 6 neutrons). However, the mass of an atomic-scale object is affected by the binding energy of the nucleons in its atomic nuclei, as well as the mass and binding energy of its electrons. Therefore, this equality holds only for the carbon-12 atom in the stated conditions, and will vary for other substances. For example, the mass of an unbound atom of the common hydrogen isotope (hydrogen-1, protium) is , the mass of a proton is , the mass of a free neutron is , and the mass of a hydrogen-2 (deuterium) atom is . In general, the difference (absolute mass excess) is less than 0.1%; exceptions include hydrogen-1 (about 0.8%), helium-3 (0.5%), lithium-6 (0.25%) and beryllium (0.14%). The dalton differs from the unit of mass in the system of atomic units, which is the electron rest mass (me). Energy equivalents The atomic mass constant can also be expressed as its energy equivalent, mu c².
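As a minimal sketch of the dalton to gram-per-mole relationship described above, the following Python lines use rounded constants; the numerical values and the function name are illustrative assumptions, and the near-equality holds only approximately after the 2019 redefinition of the mole.

```python
N_A = 6.02214076e23              # Avogadro constant, mol^-1 (exact by definition since 2019)
DALTON_IN_GRAMS = 1.66053907e-24 # approximate mass of 1 Da in grams

def molar_mass_g_per_mol(particle_mass_da: float) -> float:
    """Molar mass in g/mol of a substance whose particles each have the given mass in daltons."""
    return particle_mass_da * DALTON_IN_GRAMS * N_A  # numerically ~ particle_mass_da

# Water: one molecule averages about 18.0153 Da, so one mole weighs about 18.0153 g
print(molar_mass_g_per_mol(18.0153))
```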
The CODATA recommended values are: The mass-equivalent is commonly used in place of a unit of mass in particle physics, and these values are also important for the practical determination of relative atomic masses. History Origin of the concept The interpretation of the law of definite proportions in terms of the atomic theory of matter implied that the masses of atoms of various elements had definite ratios that depended on the elements. While the actual masses were unknown, the relative masses could be deduced from that law. In 1803 John Dalton proposed to use the (still unknown) atomic mass of the lightest atom, hydrogen, as the natural unit of atomic mass. This was the basis of the atomic weight scale. For technical reasons, in 1898, chemist Wilhelm Ostwald and others proposed to redefine the unit of atomic mass as 1/16 of the mass of an oxygen atom. That proposal was formally adopted by the International Committee on Atomic Weights (ICAW) in 1903. That was approximately the mass of one hydrogen atom, but oxygen was more amenable to experimental determination. This suggestion was made before the discovery of isotopes in 1912. Physicist Jean Perrin had adopted the same definition in 1909 during his experiments to determine the atomic masses and the Avogadro constant. This definition remained unchanged until 1961. Perrin also defined the "mole" as an amount of a compound that contained as many molecules as 32 grams of oxygen. He called that number the Avogadro number in honor of physicist Amedeo Avogadro. Isotopic variation The discovery of isotopes of oxygen in 1929 required a more precise definition of the unit. Two distinct definitions came into use. Chemists chose to define the AMU as 1/16 of the average mass of an oxygen atom as found in nature; that is, the average of the masses of the known isotopes, weighted by their natural abundance. Physicists, on the other hand, defined it as 1/16 of the mass of an atom of the isotope oxygen-16 (16O). Definition by IUPAC The existence of two distinct units with the same name was confusing, and the difference (about in relative terms) was large enough to affect high-precision measurements. Moreover, it was discovered that the isotopes of oxygen had different natural abundances in water and in air. For these and other reasons, in 1961 the International Union of Pure and Applied Chemistry (IUPAC), which had absorbed the ICAW, adopted a new definition of the atomic mass unit for use in both physics and chemistry; namely, 1/12 of the mass of a carbon-12 atom. This new value was intermediate between the two earlier definitions, but closer to the one used by chemists (who would be affected the most by the change). The new unit was named the "unified atomic mass unit" and given a new symbol "u", to replace the old "amu" that had been used for the oxygen-based unit. However, the old symbol "amu" has sometimes been used, after 1961, to refer to the new unit, particularly in lay and preparatory contexts. With this new definition, the standard atomic weight of carbon is about , and that of oxygen is about . These values, generally used in chemistry, are based on averages of many samples from Earth's crust, its atmosphere, and organic materials. Adoption by BIPM The IUPAC 1961 definition of the unified atomic mass unit, with that name and symbol "u", was adopted by the International Bureau of Weights and Measures (BIPM) in 1971 as a non-SI unit accepted for use with the SI.
Unit name In 1993, the IUPAC proposed the shorter name "dalton" (with symbol "Da") for the unified atomic mass unit. As with other unit names such as watt and newton, "dalton" is not capitalized in English, but its symbol, "Da", is capitalized. The name was endorsed by the International Union of Pure and Applied Physics (IUPAP) in 2005. In 2003 the name was recommended to the BIPM by the Consultative Committee for Units, part of the CIPM, as it "is shorter and works better with [SI] prefixes". In 2006, the BIPM included the dalton in its 8th edition of the SI brochure of formal definitions as a non-SI unit accepted for use with the SI. The name was also listed as an alternative to "unified atomic mass unit" by the International Organization for Standardization in 2009. It is now recommended by several scientific publishers, and some of them consider "atomic mass unit" and "amu" deprecated. In 2019, the BIPM retained the dalton in its 9th edition of the SI brochure, while dropping the unified atomic mass unit from its table of non-SI units accepted for use with the SI, but noted secondarily that the dalton (Da) and the unified atomic mass unit (u) are alternative names (and symbols) for the same unit. 2019 revision of the SI The definition of the dalton was not affected by the 2019 revision of the SI, that is, 1 Da in the SI is still 1/12 of the mass of a carbon-12 atom, a quantity that must be determined experimentally in terms of SI units. However, the definition of a mole was changed to be the amount of substance consisting of exactly 6.02214076 × 10²³ entities, and the definition of the kilogram was changed as well. As a consequence, the molar mass constant remains close to but no longer exactly 1 g/mol, meaning that the mass in grams of one mole of any substance remains nearly but no longer exactly numerically equal to its average molecular mass in daltons, although the relative standard uncertainty introduced at the time of the redefinition is insignificant for all practical purposes. Measurement Though relative atomic masses are defined for neutral atoms, they are measured (by mass spectrometry) for ions: hence, the measured values must be corrected for the mass of the electrons that were removed to form the ions, and also for the mass equivalent of the electron binding energy, Eb/c². The total binding energy of the six electrons in a carbon-12 atom, expressed as the fraction Eb/muc², amounts to about one part in 10 million of the mass of the atom. Before the 2019 revision of the SI, experiments were aimed at determining the value of the Avogadro constant in order to find the value of the unified atomic mass unit. Josef Loschmidt A reasonably accurate value of the atomic mass unit was first obtained indirectly by Josef Loschmidt in 1865, by estimating the number of particles in a given volume of gas. Jean Perrin Perrin estimated the Avogadro number by a variety of methods at the turn of the 20th century. He was awarded the 1926 Nobel Prize in Physics, largely for this work. Coulometry The electric charge per mole of elementary charges is a constant called the Faraday constant, F, whose value had been essentially known since 1834 when Michael Faraday published his works on electrolysis. In 1910, Robert Millikan obtained the first measurement of the charge on an electron, −e. The quotient F/e provided an estimate of the Avogadro constant. The classic experiment is that of Bower and Davis at NIST, and relies on dissolving silver metal away from the anode of an electrolysis cell, while passing a constant electric current I for a known time t.
If m is the mass of silver lost from the anode and A the atomic weight of silver, then the Faraday constant is given by F = AIt/m. The NIST scientists devised a method to compensate for silver lost from the anode by mechanical causes, and conducted an isotope analysis of the silver used to determine its atomic weight. Their value for the conventional Faraday constant was F = , which corresponds to a value for the Avogadro constant of : both values have a relative standard uncertainty of . Electron mass measurement In practice, the atomic mass constant is determined from the electron rest mass me and the electron relative atomic mass Ar(e) (that is, the mass of the electron divided by the atomic mass constant). The relative atomic mass of the electron can be measured in cyclotron experiments, while the rest mass of the electron can be derived from other physical constants through the relation me = 2Rh/(cα²), where c is the speed of light, h is the Planck constant, α is the fine-structure constant, and R is the Rydberg constant. As may be observed from the old values (2014 CODATA) in the table below, the main limiting factor in the precision of the Avogadro constant was the uncertainty in the value of the Planck constant, as all the other constants that contribute to the calculation were known more precisely. The power of having defined values of universal constants, as is presently the case, can be understood from the table below (2018 CODATA). X-ray crystal density methods Silicon single crystals may be produced today in commercial facilities with extremely high purity and with few lattice defects. This method determines the Avogadro constant as the ratio of the molar volume, Vm, to the atomic volume Vatom: NA = Vm/Vatom, where Vatom = Vcell/n and n is the number of atoms per unit cell of volume Vcell. The unit cell of silicon has a cubic packing arrangement of 8 atoms, and the unit cell volume may be measured by determining a single unit cell parameter, the length a of one of the sides of the cube. The CODATA value of a for silicon is . In practice, measurements are carried out on a distance known as d220(Si), which is the distance between the planes denoted by the Miller indices {220}, and is equal to . The isotopic composition of the sample used must be measured and taken into account. Silicon occurs in three stable isotopes (28Si, 29Si, 30Si), and the natural variation in their proportions is greater than other uncertainties in the measurements. The atomic weight A for the sample crystal can be calculated, as the standard atomic weights of the three nuclides are known with great accuracy. This, together with the measured density ρ of the sample, allows the molar volume Vm to be determined from Vm = A·Mu/ρ, where Mu is the molar mass constant. The CODATA value for the molar volume of silicon is , with a relative standard uncertainty of
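A rough illustration of the silver coulometry route above, written in Python: it applies F = AIt/m for monovalent silver and then estimates the Avogadro constant as NA = F/e. The run parameters are invented placeholders for the sake of the example, not the Bower and Davis measurements.

```python
ATOMIC_WEIGHT_AG = 107.8682          # g/mol, standard atomic weight of silver
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs, exact by definition since 2019

def faraday_constant(mass_lost_g: float, current_a: float, time_s: float) -> float:
    """F = A*I*t/m for a monovalent metal such as silver dissolved from the anode."""
    return ATOMIC_WEIGHT_AG * current_a * time_s / mass_lost_g

# Placeholder run: 4.10 g of silver dissolved by 0.2034 A flowing for 18,000 s
F = faraday_constant(mass_lost_g=4.10, current_a=0.2034, time_s=18000.0)
N_A = F / ELEMENTARY_CHARGE          # estimate of the Avogadro constant, mol^-1
print(f"F ~ {F:.0f} C/mol, N_A ~ {N_A:.4e} 1/mol")
```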
Physical sciences
Mass
null
42563
https://en.wikipedia.org/wiki/Polyp%20%28zoology%29
Polyp (zoology)
A polyp in zoology is one of two forms found in the phylum Cnidaria, the other being the medusa. Polyps are roughly cylindrical in shape and elongated at the axis of the vase-shaped body. In solitary polyps, the aboral (opposite to oral) end is attached to the substrate by means of a disc-like holdfast called a pedal disc, while in colonies of polyps it is connected to other polyps, either directly or indirectly. The oral end contains the mouth, and is surrounded by a circlet of tentacles. Classes In the class Anthozoa, comprising the sea anemones and corals, the individual is always a polyp; in the class Hydrozoa, however, the individual may be either a polyp or a medusa, with most species undergoing a life cycle with both a polyp stage and a medusa stage. In the class Scyphozoa, the medusa stage is dominant, and the polyp stage may or may not be present, depending on the family. In those scyphozoans that have the larval planula metamorphose into a polyp, the polyp, also called a "scyphistoma," grows until it develops a stack of plate-like medusae that pinch off and swim away in a process known as strobilation. Once strobilation is complete, the polyp may die, or regenerate itself to repeat the process again later. With cubozoans, the planula settles onto a suitable surface, and develops into a polyp. The cubozoan polyp then eventually metamorphoses directly into a medusa. Anatomy The body of the polyp may be roughly compared in structure to a sac, the wall of which is composed of two layers of cells. The outer layer is known technically as the ectoderm, with the inner layer as the endoderm (or gastroderm). Between ectoderm and endoderm is a supporting layer of structureless gelatinous substance termed mesoglea, secreted by the cell layers of the body wall. The mesoglea can be thinner than the endoderm or ectoderm or comprise the bulk of the body as in larger jellyfish. The mesoglea can contain skeletal elements derived from cells that have migrated from the ectoderm. The sac-like body built up in this way is attached usually to some firm object by its blind end, and bears at the upper end the mouth, which is surrounded by a circle of tentacles resembling glove fingers. The tentacles are organs which serve both for the tactile sense and for the capture of food. Polyps extend their tentacles, particularly at night; the tentacles contain coiled stinging nettle-like cells, or nematocysts, which pierce, poison, and firmly hold living prey, paralysing or killing it. Polyp prey includes copepods and fish larvae. Longitudinal muscular fibrils formed from the cells of the ectoderm allow the tentacles to contract when conveying the food to the mouth. Similarly, circularly disposed muscular fibrils formed from the endoderm permit the tentacles to be protracted, or thrust out, once they have contracted. These muscle fibres belong to the same two systems, allowing the whole body to retract or protrude outwards. The body of a polyp can therefore be divided into the column, circular or oval in section, forming the trunk, resting on a base or foot and surmounted by the crown of tentacles, which enclose an area termed the peristome, in the centre of which again is the mouth. Generally, there is no other opening to the body except the mouth, but in some cases excretory pores are known to occur in the foot, and pores may occur at the tips of the tentacles.
A polyp is an animal of very simple structure, a living fossil that has not changed significantly for about half a billion years (per generally accepted dating of Cambrian sedimentary rock). The external form of the polyp varies greatly in different cases. The column may be long and slender, or may be so short in the vertical direction that the body becomes disk-like. The tentacles may number many hundreds or may be very few, in rare cases only one or two. They may be long and filamentous, or short and reduced to mere knobs or warts. They may be simple and unbranched, or they may be feathery in pattern. The mouth may be level with the surface of the peristome, or may be projecting and trumpet-shaped. As regards internal structure, polyps exhibit two well-marked types of organization, each characteristic of one of the two classes, Hydrozoa and Anthozoa. In the class Hydrozoa, the polyps are indeed often very simple, like the common little freshwater species of the genus Hydra. Anthozoan polyps, including the corals and sea anemones, are much more complex due to the development of a tubular stomodaeum leading inward from the mouth and a series of radial partitions called mesenteries. Many of the mesenteries project into the enteric cavity but some extend from the body wall to the central stomodaeum. Reproduction It is an almost universal attribute of polyps to reproduce asexually by the method of budding. This mode of reproduction may be combined with sexual reproduction, or may be the sole method by which the polyp produces offspring, in which case the polyp is entirely without sexual organs. Asexual reproduction In many cases the buds formed do not separate from the parent but remain in continuity with it, thus forming colonies or stocks, which may reach a great size and contain a vast number of individuals. Slight differences in the method of budding produce great variations in the form of the colonies. The reef-building corals are polyp-colonies, strengthened by the formation of a firm skeleton. Sexual reproduction Among sea anemones, sexual plasticity may occur. That is, asexually produced clones derived from a single founder individual can contain both male and female individuals (ramets). When eggs and sperm (gametes) are formed, they can produce zygotes derived from "selfing" (within the founding clone) or out-crossing, which then develop into swimming planula larvae. The overwhelming majority of stony coral (Scleractinia) taxa are hermaphroditic in their adult colonies. In these species, there is ordinarily synchronized release of eggs and sperm into the water during brief spawning events. Although some species are capable of self-fertilization to varying extents, cross-fertilization appears to be the dominant mating pattern. Etymology The name polyp was given by René Antoine Ferchault de Réaumur to these organisms because of their superficial resemblance to an octopus, with its circle of writhing arms round the mouth; the word ultimately derives from the Ancient Greek for "much" and "foot". This comparison contrasts with the common name "coral-insects", applied to the polyps which form coral. Threats 75% of the world's corals are threatened due to overfishing, destructive fishing, coastal development, pollution, thermal stress, ocean acidification, crown-of-thorns starfish, and introduced invasive species.
In recent decades the conditions in which corals and their polyps live have been changing, and new diseases have been observed in corals in many parts of the world, posing an even greater risk to already pressured animals. Aquatic life has been placed under substantial stress by pollutants from land-based agriculture. In particular, exposure to the insecticide profenofos and the fungicide MEMC has played a major part in polyp retraction and biomass decrease. Many experiments support the hypothesis that heat stress in Acropora tenuis juvenile polyps provokes an up-regulation of proteins in the endoplasmic reticulum. The results vary with polyp characteristics such as age, type, and growth stage.
Biology and health sciences
Cnidarians
Animals
42567
https://en.wikipedia.org/wiki/Spleen
Spleen
The spleen (, from Ancient Greek σπλήν, splḗn) is an organ found in almost all vertebrates. Similar in structure to a large lymph node, it acts primarily as a blood filter. The spleen plays important roles in regard to red blood cells (erythrocytes) and the immune system. It removes old red blood cells and holds a reserve of blood, which can be valuable in case of hemorrhagic shock, and also recycles iron. As a part of the mononuclear phagocyte system, it metabolizes hemoglobin removed from senescent red blood cells. The globin portion of hemoglobin is degraded to its constitutive amino acids, and the heme portion is metabolized to bilirubin, which is removed in the liver. The spleen houses antibody-producing lymphocytes in its white pulp and monocytes which remove antibody-coated bacteria and antibody-coated blood cells by way of blood and lymph node circulation. These monocytes, upon moving to injured tissue (such as the heart after myocardial infarction), turn into dendritic cells and macrophages while promoting tissue healing. The spleen is a center of activity of the mononuclear phagocyte system and is analogous to a large lymph node, as its absence causes a predisposition to certain infections. In humans, the spleen is purple in color and is in the left upper quadrant of the abdomen. The surgical process to remove the spleen is known as a splenectomy. Structure In humans, the spleen is underneath the left part of the diaphragm, and has a smooth, convex surface that faces the diaphragm. It is underneath the ninth, tenth, and eleventh ribs. The other side of the spleen is divided by a ridge into two regions: an anterior gastric portion, and a posterior renal portion. The gastric surface is directed forward, upward, and toward the middle, is broad and concave, and is in contact with the posterior wall of the stomach. Below this it is in contact with the tail of the pancreas. The renal surface is directed medialward and downward. It is somewhat flattened, considerably narrower than the gastric surface, and is in relation with the upper part of the anterior surface of the left kidney and occasionally with the left adrenal gland. There are four ligaments attached to the spleen: gastrosplenic ligament, splenorenal ligament, colicosplenic ligament, and phrenocolic ligament. Measurements The spleen, in healthy adult humans, is approximately in length. An easy way to remember the anatomy of the spleen is the 1×3×5×7×9×10×11 rule. The spleen is , weighs approximately , and lies between the ninth and eleventh ribs on the left-hand side and along the axis of the tenth rib. The weight varies between and (standard reference range), correlating mainly to height, body weight and degree of acute congestion but not to sex or age. Blood supply Near the middle of the spleen is a long fissure, the hilum, which is the point of attachment for the gastrosplenic ligament and the point of insertion for the splenic artery and splenic vein. There are other openings present for lymphatic vessels and nerves. In addition to the splenic artery, collateral blood supply is provided by the adjacent short gastric arteries. Like the thymus, the spleen possesses only efferent lymphatic vessels. The spleen is part of the lymphatic system. Both the short gastric arteries and the splenic artery supply it with blood. The germinal centers are supplied by arterioles called penicilliary radicles. Nerve supply The spleen is innervated by the splenic plexus, which connects a branch of the celiac ganglia to the vagus nerve. 
The underlying central nervous processes coordinating the spleen's function seem to be embedded into the hypothalamic-pituitary-adrenal-axis, and the brainstem, especially the subfornical organ. Development The spleen is unique in respect to its development within the gut. While most of the gut organs are endodermally derived, the spleen is derived from mesenchymal tissue. Specifically, the spleen forms within, and from, the dorsal mesentery. However, it still shares the same blood supply—the celiac trunk—as the foregut organs. Function Pulp Other Other functions of the spleen are less prominent, especially in the healthy adult: Spleen produces all types of blood cells during fetal life Production of opsonins, properdin, and tuftsin. Release of neutrophils following myocardial infarction. Creation of red blood cells. While the bone marrow is the primary site of hematopoiesis in the adult, the spleen has important hematopoietic functions up until the fifth month of gestation. After birth, erythropoietic functions cease, except in some hematologic disorders. As a major lymphoid organ and a central player in the reticuloendothelial system, the spleen retains the ability to produce lymphocytes and, as such, remains a hematopoietic organ. Storage of red blood cells, lymphocytes and other formed elements. The spleen of horses stores roughly 30 percent of the red blood cells and can release them when needed. In humans, up to a cup (240 ml) of red blood cells is held within the spleen and released in cases of hypovolemia and hypoxia. It can store platelets in case of an emergency and also clears old platelets from the circulation. Up to a quarter of lymphocytes are stored in the spleen at any one time. Clinical significance Enlarged spleen Enlargement of the spleen is known as splenomegaly. It may be caused by sickle cell anemia, sarcoidosis, malaria, bacterial endocarditis, leukemia, polycythemia vera, pernicious anemia, Gaucher's disease, leishmaniasis, Hodgkin's disease, Banti's disease, hereditary spherocytosis, cysts, glandular fever (including mononucleosis or 'Mono' caused by the Epstein–Barr virus and infection from cytomegalovirus), and tumours. Primary tumors of the spleen include hemangiomas and hemangiosarcomas. Marked splenomegaly may result in the spleen occupying a large portion of the left side of the abdomen. The spleen is the largest collection of lymphoid tissue in the body. It is normally palpable in preterm infants, in 30% of normal, full-term neonates, and in 5% to 10% of infants and toddlers. A spleen easily palpable below the costal margin in any child over the age of three to four years should be considered abnormal until proven otherwise. Splenomegaly can result from antigenic stimulation (e.g., infection), obstruction of blood flow (e.g., portal vein obstruction), underlying functional abnormality (e.g., hemolytic anemia), or infiltration (e.g., leukemia or storage disease, such as Gaucher's disease). The most common cause of acute splenomegaly in children is viral infection, which is transient and usually moderate. Basic work-up for acute splenomegaly includes a complete blood count with differential, platelet count, and reticulocyte and atypical lymphocyte counts to exclude hemolytic anemia and leukemia. Assessment of IgM antibodies to viral capsid antigen (a rising titer) is indicated to confirm Epstein–Barr virus or cytomegalovirus. Other infections should be excluded if these tests are negative. 
Calculators have been developed for measurements of spleen size based on CT, US, and MRI findings. Splenic injury Trauma, such as a road traffic collision, can cause rupture of the spleen, which is a situation requiring immediate medical attention. Asplenia Asplenia refers to a non-functioning spleen, which may be congenital, or caused by traumatic injury, surgical resection (splenectomy) or a disease such as sickle cell anaemia. Hyposplenia refers to a partially functioning spleen. These conditions may cause a modest increase in circulating white blood cells and platelets, a diminished response to some vaccines, and an increased susceptibility to infection. In particular, there is an increased risk of sepsis from polysaccharide encapsulated bacteria. Encapsulated bacteria inhibit binding of complement or prevent complement assembled on the capsule from interacting with macrophage receptors. Phagocytosis needs natural antibodies, which are immunoglobulins that facilitate phagocytosis either directly or by complement deposition on the capsule. They are produced by IgM memory B cells (a subtype of B cells) in the marginal zone of the spleen. A splenectomy (removal of the spleen) results in a greatly diminished frequency of memory B cells. A 28-year follow-up of 740 World War II veterans whose spleens were removed on the battlefield showed a significant increase in the usual death rate from pneumonia (6 rather than the expected 1.3) and an increase in the death rate from ischemic heart disease (41 rather than the expected 30), but not from other conditions. Accessory spleen An accessory spleen is a small splenic nodule extra to the spleen usually formed in early embryogenesis. Accessory spleens are found in approximately 10 percent of the population and are typically around 1 centimeter in diameter. Splenosis is a condition where displaced pieces of splenic tissue (often following trauma or splenectomy) autotransplant in the abdominal cavity as accessory spleens. Polysplenia is a congenital disease manifested by multiple small accessory spleens, rather than a single, full-sized, normal spleen. Polysplenia sometimes occurs alone, but it is often accompanied by other developmental abnormalities such as intestinal malrotation or biliary atresia, or cardiac abnormalities, such as dextrocardia. These accessory spleens are non-functional. Infarction Splenic infarction is a condition in which blood flow supply to the spleen is compromised, leading to partial or complete infarction (tissue death due to oxygen shortage) in the organ. Splenic infarction occurs when the splenic artery or one of its branches are occluded, for example by a blood clot. Although it can occur asymptomatically, the typical symptom is severe pain in the left upper quadrant of the abdomen, sometimes radiating to the left shoulder. Fever and chills develop in some cases. It has to be differentiated from other causes of acute abdomen. Hyaloserositis The spleen may be affected by hyaloserositis, in which it is coated with fibrous hyaline. Society and culture There has been a long and varied history of misconceptions regarding the physiological role of the spleen, and it has often been seen as a reservoir for juices closely linked to digestion. In various cultures, the organ has been linked to melancholia, due to the influence of ancient Greek medicine and the associated doctrine of humourism, in which the spleen was believed to be a reservoir for an elusive fluid known as "black bile" (one of the four humours). 
The spleen also plays an important role in traditional Chinese medicine, where it is considered to be a key organ that displays the Yin aspect of the Earth element (its Yang counterpart is the stomach). In contrast, the Talmud (tractate Berachoth 61b) refers to the spleen as the organ of laughter while possibly suggesting a link with the humoral view of the organ. Etymologically, spleen comes from the Ancient Greek (splḗn), where it was the idiomatic equivalent of the heart in modern English. Persius, in his satires, associated spleen with immoderate laughter. The native Old English word for it is , now primarily used for animals; a loanword from Latin is . In English, William Shakespeare frequently used the word spleen to signify melancholy, but also caprice and merriment. In Julius Caesar, he uses the spleen to describe Cassius's irritable nature: Must I observe you? must I stand and crouch Under your testy humour? By the gods You shall digest the venom of your spleen, Though it do split you; for, from this day forth, I'll use you for my mirth, yea, for my laughter, When you are waspish. The spleen, as a byword for melancholy, has also been considered an actual disease. In the early 18th century, the physician Richard Blackmore considered it to be one of the two most prevalent diseases in England (along with consumption). In 1701, Anne Finch (later, Countess of Winchilsea) had published a Pindaric ode, The Spleen, drawing on her first-hand experiences of an affliction which, at the time, also had a reputation of being a fashionably upper-class disease of the English. Both Blackmore and George Cheyne treated this malady as the male equivalent of "the vapours", while preferring the more learned terms "hypochondriasis" and "hysteria". In the late 18th century, the German word Spleen came to denote eccentric and hypochondriac tendencies that were thought to be characteristic of English people. In French, "splénétique" refers to a state of pensive sadness or melancholy. This usage was popularised by the poems of Charles Baudelaire (1821–1867) and his collection Le Spleen de Paris, but it was also present in earlier 19th-century Romantic literature. Food The spleen is one of the many organs that may be included in offal. It is not widely eaten as a principal ingredient, but cow spleen sandwiches are eaten in Sicilian cuisine. Chicken spleen is one of the main ingredients of Jerusalem mixed grill. Other animals In cartilaginous and ray-finned fish, the spleen consists primarily of red pulp and is normally somewhat elongated, as it lies inside the serosal lining of the intestine. In many amphibians, especially frogs, it has the more rounded form and there is often a greater quantity of white pulp. In reptiles, birds, and mammals, white pulp is always relatively plentiful, and in birds and some mammals the spleen is typically rounded, but it adjusts its shape somewhat to the arrangement of the surrounding organs. In most vertebrates, the spleen continues to produce red blood cells throughout life; only in mammals this function is lost in middle-aged adults. Many mammals have tiny spleen-like structures known as haemal nodes throughout the body that are presumed to have the same function as the spleen. The spleens of aquatic mammals differ in some ways from those of fully land-dwelling mammals; in general they are bluish in colour. In cetaceans and manatees, they tend to be quite small, but in deep diving pinnipeds, they can be massive, due to their function of storing red blood cells. 
Marsupials have Y-shaped spleens, which develop postnatally. The only vertebrates lacking a spleen are the lampreys and hagfishes (the early-branching Cyclostomata, or jawless fishes). Even in these animals, there is a diffuse layer of haematopoietic tissue within the gut wall, which has a similar structure to red pulp and is presumed homologous with the spleen of higher vertebrates. In mice, the spleen stores half the body's monocytes so that, upon injury, they can migrate to the injured tissue and transform into dendritic cells and macrophages to assist wound healing.
Biology and health sciences
Circulatory system
null
42693
https://en.wikipedia.org/wiki/Upper%20and%20lower%20bounds
Upper and lower bounds
In mathematics, particularly in order theory, an upper bound or majorant of a subset of some preordered set is an element of the preordered set that is greater than or equal to every element of the subset. Dually, a lower bound or minorant of the subset is defined to be an element of the preordered set that is less than or equal to every element of the subset. A set with an upper (respectively, lower) bound is said to be bounded from above or majorized (respectively bounded from below or minorized) by that bound. The terms bounded above (bounded below) are also used in the mathematical literature for sets that have upper (respectively lower) bounds. Examples For example, is a lower bound for the set (as a subset of the integers or of the real numbers, etc.), and so is . On the other hand, is not a lower bound for since it is not smaller than every element in . and other numbers x such that would be an upper bound for S. The set has as both an upper bound and a lower bound; all other numbers are either an upper bound or a lower bound for that set. Every subset of the natural numbers has a lower bound since the natural numbers have a least element (0 or 1, depending on convention). An infinite subset of the natural numbers cannot be bounded from above. An infinite subset of the integers may be bounded from below or bounded from above, but not both. An infinite subset of the rational numbers may or may not be bounded from below, and may or may not be bounded from above. Every finite subset of a non-empty totally ordered set has both upper and lower bounds. Bounds of functions The definitions can be generalized to functions and even to sets of functions. Given a function with a domain and a preordered set as its codomain, an element of the codomain is an upper bound of the function if it is greater than or equal to the value of the function at each point of the domain. The upper bound is called sharp if equality holds for at least one value of the argument; it indicates that the constraint is optimal, and thus cannot be further reduced without invalidating the inequality. Similarly, a second function defined on the same domain and having the same codomain is an upper bound of the first if its value is greater than or equal to the value of the first function at each point of the domain. A function is further said to be an upper bound of a set of functions if it is an upper bound of each function in that set. The notion of lower bound for (sets of) functions is defined analogously, by replacing ≥ with ≤. Tight bounds An upper bound is said to be a tight upper bound, a least upper bound, or a supremum, if no smaller value is an upper bound. Similarly, a lower bound is said to be a tight lower bound, a greatest lower bound, or an infimum, if no greater value is a lower bound. Exact upper bounds An upper bound of a subset of a preordered set is said to be an exact upper bound for that subset if every element of the preordered set that is strictly majorized by the bound is also majorized by some element of the subset. Exact upper bounds of reduced products of linear orders play an important role in PCF theory.
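A brief symbolic restatement of these definitions may be useful; the symbols S, K, and u below are illustrative names introduced here, not notation taken from the text above.

```latex
% Illustrative formalization; S, K and u are names chosen here for clarity.
% u is an upper bound of a subset S of a preordered set K:
\forall s \in S : \; s \le u .
% u is a least upper bound (supremum) of S if, in addition,
% every upper bound v of S satisfies u <= v:
\forall v \in K : \bigl( \forall s \in S : s \le v \bigr) \Rightarrow u \le v .
% Example: for S = \{ x \in \mathbb{Q} : x^2 < 2 \} viewed as a subset of \mathbb{R},
% every real number at least \sqrt{2} is an upper bound, and \sup S = \sqrt{2}.
```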
Mathematics
Order theory
null
42709
https://en.wikipedia.org/wiki/Pendulum
Pendulum
A pendulum is a device made of a weight suspended from a pivot so that it can swing freely. When a pendulum is displaced sideways from its resting, equilibrium position, it is subject to a restoring force due to gravity that will accelerate it back toward the equilibrium position. When released, the restoring force acting on the pendulum's mass causes it to oscillate about the equilibrium position, swinging back and forth. The time for one complete cycle, a left swing and a right swing, is called the period. The period depends on the length of the pendulum and also to a slight degree on the amplitude, the width of the pendulum's swing. The regular motion of pendulums was used for timekeeping and was the world's most accurate timekeeping technology until the 1930s. The pendulum clock invented by Christiaan Huygens in 1656 became the world's standard timekeeper, used in homes and offices for 270 years, and achieved accuracy of about one second per year before it was superseded as a time standard by the quartz clock in the 1930s. Pendulums are also used in scientific instruments such as accelerometers and seismometers. Historically they were used as gravimeters to measure the acceleration of gravity in geophysical surveys, and even as a standard of length. The word pendulum is Neo-Latin, from the Latin pendulus, meaning "hanging". Mechanics Simple gravity pendulum The simple gravity pendulum is an idealized mathematical model of a pendulum. This is a weight (or bob) on the end of a massless cord suspended from a pivot, without friction. When given an initial push, it will swing back and forth at a constant amplitude. Real pendulums are subject to friction and air drag, so the amplitude of their swings declines. Period of oscillation The period of swing of a simple gravity pendulum depends on its length, the local strength of gravity, and to a small extent on the maximum angle that the pendulum swings away from vertical, θ0, called the amplitude. It is independent of the mass of the bob. If the amplitude is limited to small swings, the period of a simple pendulum, the time taken for a complete cycle, is given approximately by T ≈ 2π√(L/g) (1), where L is the length of the pendulum and g is the local acceleration of gravity. For small swings the period of swing is approximately the same for different size swings: that is, the period is independent of amplitude. This property, called isochronism, is the reason pendulums are so useful for timekeeping. Successive swings of the pendulum, even if changing in amplitude, take the same amount of time. For larger amplitudes, the period increases gradually with amplitude, so it is longer than given by equation (1). For example, at an amplitude of θ0 = 0.4 radians (23°) it is 1% larger than given by (1). The period increases asymptotically (to infinity) as θ0 approaches π radians (180°), because the value θ0 = π is an unstable equilibrium point for the pendulum. The true period of an ideal simple gravity pendulum can be written in several different forms (see pendulum (mechanics)), one example being the infinite series T = 2π√(L/g) (1 + θ0²/16 + 11θ0⁴/3072 + ...), where θ0 is in radians. The difference between this true period and the period for small swings (1) above is called the circular error. In the case of a typical grandfather clock whose pendulum has a swing of 6° and thus an amplitude of 3° (0.05 radians), the difference between the true period and the small angle approximation (1) amounts to about 15 seconds per day. 
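As a rough numerical illustration of equation (1) and the amplitude series above, here is a minimal Python sketch; the function names are chosen here for illustration, and standard gravity of 9.81 m/s² is assumed.

```python
import math

def small_angle_period(length_m: float, g: float = 9.81) -> float:
    """Equation (1): small-swing period T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

def corrected_period(length_m: float, amplitude_rad: float, g: float = 9.81) -> float:
    """Period including the first two terms of the circular-error series."""
    t0 = small_angle_period(length_m, g)
    a2 = amplitude_rad ** 2
    return t0 * (1 + a2 / 16 + 11 * a2 ** 2 / 3072)

# A 1 m pendulum: about 2.006 s for small swings, and roughly 1% longer at 0.4 rad,
# consistent with the figure quoted in the text.
print(small_angle_period(1.0))
print(corrected_period(1.0, 0.4))
```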
For small swings the pendulum approximates a harmonic oscillator, and its motion as a function of time, t, is approximately simple harmonic motion: θ(t) = θ0 cos(2πt/T + φ), where φ is a constant phase angle dependent on initial conditions. For real pendulums, the period varies slightly with factors such as the buoyancy and viscous resistance of the air, the mass of the string or rod, the size and shape of the bob and how it is attached to the string, and flexibility and stretching of the string. In precision applications, corrections for these factors may need to be applied to eq. (1) to give the period accurately. A damped, driven pendulum is a chaotic system. Compound pendulum Any swinging rigid body free to rotate about a fixed horizontal axis is called a compound pendulum or physical pendulum. A compound pendulum has the same period as a simple gravity pendulum of a certain length, called the equivalent length or radius of oscillation, equal to the distance from the pivot to a point called the center of oscillation. This point is located under the center of mass of the pendulum, at a distance which depends on the mass distribution of the pendulum. If most of the mass is concentrated in a relatively small bob compared to the pendulum length, the center of oscillation is close to the center of mass. The radius of oscillation or equivalent length of any physical pendulum can be shown to be I/(mR), where I is the moment of inertia of the pendulum about the pivot point, m is the total mass of the pendulum, and R is the distance between the pivot point and the center of mass. Substituting this expression in (1) above, the period of a compound pendulum is given by T ≈ 2π√(I/(mgR)) for sufficiently small oscillations. For example, a rigid uniform rod of length L pivoted about one end has moment of inertia I = mL²/3. The center of mass is located at the center of the rod, so R = L/2. Substituting these values into the above equation gives T ≈ 2π√(2L/(3g)). This shows that a rigid rod pendulum has the same period as a simple pendulum of two-thirds its length. Christiaan Huygens proved in 1673 that the pivot point and the center of oscillation are interchangeable. This means if any pendulum is turned upside down and swung from a pivot located at its previous center of oscillation, it will have the same period as before and the new center of oscillation will be at the old pivot point. In 1817 Henry Kater used this idea to produce a type of reversible pendulum, now known as a Kater pendulum, for improved measurements of the acceleration due to gravity. Double pendulum In physics and mathematics, in the area of dynamical systems, a double pendulum, also known as a chaotic pendulum, is a pendulum with another pendulum attached to its end, forming a simple physical system that exhibits rich dynamic behavior with a strong sensitivity to initial conditions. The motion of a double pendulum is governed by a set of coupled ordinary differential equations and is chaotic. History One of the earliest known uses of a pendulum was a 1st-century seismometer device of Han dynasty Chinese scientist Zhang Heng. Its function was to sway and activate one of a series of levers after being disturbed by the tremor of an earthquake far away. Released by a lever, a small ball would fall out of the urn-shaped device into one of eight metal toads' mouths below, at the eight points of the compass, signifying the direction in which the earthquake was located. 
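Returning to the compound pendulum formula above, a small Python sketch checking the uniform-rod example; the names and the value of g are chosen here for illustration only.

```python
import math

def compound_pendulum_period(I, m, R, g=9.81):
    """Small-oscillation period of a physical pendulum: T = 2*pi*sqrt(I / (m*g*R))."""
    return 2 * math.pi * math.sqrt(I / (m * g * R))

# Uniform rod of length L pivoted at one end: I = m*L^2/3 and R = L/2,
# so the equivalent simple-pendulum length is 2L/3.
m, L = 1.0, 1.0
T_rod = compound_pendulum_period(I=m * L**2 / 3, m=m, R=L / 2)
T_equivalent = 2 * math.pi * math.sqrt((2 * L / 3) / 9.81)
print(T_rod, T_equivalent)  # both are about 1.64 s, as the text's result predicts
```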
Many sources claim that the 10th-century Egyptian astronomer Ibn Yunus used a pendulum for time measurement, but this was an error that originated in 1684 with the British historian Edward Bernard. During the Renaissance, large hand-pumped pendulums were used as sources of power for manual reciprocating machines such as saws, bellows, and pumps. 1602: Galileo's research Italian scientist Galileo Galilei was the first to study the properties of pendulums, beginning around 1602. The first recorded interest in pendulums made by Galileo was around 1588 in his posthumously published notes titled On Motion, in which he noted that heavier objects would continue to oscillate for a greater amount of time than lighter objects. The earliest extant report of his experimental research is contained in a letter to Guido Ubaldo dal Monte, from Padua, dated November 29, 1602. His biographer and student, Vincenzo Viviani, claimed his interest had been sparked around 1582 by the swinging motion of a chandelier in Pisa Cathedral. Galileo discovered the crucial property that makes pendulums useful as timekeepers, called isochronism; the period of the pendulum is approximately independent of the amplitude or width of the swing. He also found that the period is independent of the mass of the bob, and proportional to the square root of the length of the pendulum. He first employed freeswinging pendulums in simple timing applications. Santorio Santori in 1602 invented a device which measured a patient's pulse by the length of a pendulum; the pulsilogium. In 1641 Galileo dictated to his son Vincenzo a design for a mechanism to keep a pendulum swinging, which has been described as the first pendulum clock; Vincenzo began construction, but had not completed it when he died in 1649. 1656: The pendulum clock In 1656 the Dutch scientist Christiaan Huygens built the first pendulum clock. This was a great improvement over existing mechanical clocks; their best accuracy was improved from around 15 minutes deviation a day to around 15 seconds a day. Pendulums spread over Europe as existing clocks were retrofitted with them. The English scientist Robert Hooke studied the conical pendulum around 1666, consisting of a pendulum that is free to swing in two dimensions, with the bob rotating in a circle or ellipse. He used the motions of this device as a model to analyze the orbital motions of the planets. Hooke suggested to Isaac Newton in 1679 that the components of orbital motion consisted of inertial motion along a tangent direction plus an attractive motion in the radial direction. This played a part in Newton's formulation of the law of universal gravitation. Robert Hooke was also responsible for suggesting as early as 1666 that the pendulum could be used to measure the force of gravity. During his expedition to Cayenne, French Guiana in 1671, Jean Richer found that a pendulum clock was minutes per day slower at Cayenne than at Paris. From this he deduced that the force of gravity was lower at Cayenne. In 1687, Isaac Newton in Principia Mathematica showed that this was because the Earth was not a true sphere but slightly oblate (flattened at the poles) from the effect of centrifugal force due to its rotation, causing gravity to increase with latitude. Portable pendulums began to be taken on voyages to distant lands, as precision gravimeters to measure the acceleration of gravity at different points on Earth, eventually resulting in accurate models of the shape of the Earth. 
1673: Huygens' Horologium Oscillatorium In 1673, 17 years after he invented the pendulum clock, Christiaan Huygens published his theory of the pendulum, Horologium Oscillatorium sive de motu pendulorum. Marin Mersenne and René Descartes had discovered around 1636 that the pendulum was not quite isochronous; its period increased somewhat with its amplitude. Huygens analyzed this problem by determining what curve an object must follow to descend by gravity to the same point in the same time interval, regardless of starting point; the so-called tautochrone curve. By a complicated method that was an early use of calculus, he showed this curve was a cycloid, rather than the circular arc of a pendulum, confirming that the pendulum was not isochronous and Galileo's observation of isochronism was accurate only for small swings. Huygens also solved the problem of how to calculate the period of an arbitrarily shaped pendulum (called a compound pendulum), discovering the center of oscillation, and its interchangeability with the pivot point. The existing clock movement, the verge escapement, made pendulums swing in very wide arcs of about 100°. Huygens showed this was a source of inaccuracy, causing the period to vary with amplitude changes caused by small unavoidable variations in the clock's drive force. To make its period isochronous, Huygens mounted cycloidal-shaped metal guides next to the pivots in his clocks, that constrained the suspension cord and forced the pendulum to follow a cycloid arc (see cycloidal pendulum). This solution didn't prove as practical as simply limiting the pendulum's swing to small angles of a few degrees. The realization that only small swings were isochronous motivated the development of the anchor escapement around 1670, which reduced the pendulum swing in clocks to 4°–6°. This became the standard escapement used in pendulum clocks. 1721: Temperature compensated pendulums During the 18th and 19th century, the pendulum clock's role as the most accurate timekeeper motivated much practical research into improving pendulums. It was found that a major source of error was that the pendulum rod expanded and contracted with changes in ambient temperature, changing the period of swing. This was solved with the invention of temperature compensated pendulums, the mercury pendulum in 1721 and the gridiron pendulum in 1726, reducing errors in precision pendulum clocks to a few seconds per week. The accuracy of gravity measurements made with pendulums was limited by the difficulty of finding the location of their center of oscillation. Huygens had discovered in 1673 that a pendulum has the same period when hung from its center of oscillation as when hung from its pivot, and the distance between the two points was equal to the length of a simple gravity pendulum of the same period. In 1818 British Captain Henry Kater invented the reversible Kater's pendulum which used this principle, making possible very accurate measurements of gravity. For the next century the reversible pendulum was the standard method of measuring absolute gravitational acceleration. 1851: Foucault pendulum In 1851, Jean Bernard Léon Foucault showed that the plane of oscillation of a pendulum, like a gyroscope, tends to stay constant regardless of the motion of the pivot, and that this could be used to demonstrate the rotation of the Earth. He suspended a pendulum free to swing in two dimensions (later named the Foucault pendulum) from the dome of the Panthéon in Paris. The length of the cord was . 
Once the pendulum was set in motion, the plane of swing was observed to precess or rotate 360° clockwise in about 32 hours. This was the first demonstration of the Earth's rotation that did not depend on celestial observations, and a "pendulum mania" broke out, as Foucault pendulums were displayed in many cities and attracted large crowds. 1930: Decline in use Around 1900 low-thermal-expansion materials began to be used for pendulum rods in the highest precision clocks and other instruments, first invar, a nickel steel alloy, and later fused quartz, which made temperature compensation trivial. Precision pendulums were housed in low pressure tanks, which kept the air pressure constant to prevent changes in the period due to changes in buoyancy of the pendulum due to changing atmospheric pressure. The best pendulum clocks achieved accuracy of around a second per year. The timekeeping accuracy of the pendulum was exceeded by the quartz crystal oscillator, invented in 1921, and quartz clocks, invented in 1927, replaced pendulum clocks as the world's best timekeepers. Pendulum clocks were used as time standards until World War 2, although the French Time Service continued using them in their official time standard ensemble until 1954. Pendulum gravimeters were superseded by "free fall" gravimeters in the 1950s, but pendulum instruments continued to be used into the 1970s. Use for time measurement For 300 years, from its discovery around 1582 until development of the quartz clock in the 1930s, the pendulum was the world's standard for accurate timekeeping. In addition to clock pendulums, freeswinging seconds pendulums were widely used as precision timers in scientific experiments in the 17th and 18th centuries. Pendulums require great mechanical stability: a length change of only 0.02%, 0.2 mm in a grandfather clock pendulum, will cause an error of a minute per week. Clock pendulums Pendulums in clocks (see example at right) are usually made of a weight or bob (b) suspended by a rod of wood or metal (a). To reduce air resistance (which accounts for most of the energy loss in precision clocks) the bob is traditionally a smooth disk with a lens-shaped cross section, although in antique clocks it often had carvings or decorations specific to the type of clock. In quality clocks the bob is made as heavy as the suspension can support and the movement can drive, since this improves the regulation of the clock (see Accuracy below). A common weight for seconds pendulum bobs is . Instead of hanging from a pivot, clock pendulums are usually supported by a short straight spring (d) of flexible metal ribbon. This avoids the friction and 'play' caused by a pivot, and the slight bending force of the spring merely adds to the pendulum's restoring force. The highest precision clocks have pivots of 'knife' blades resting on agate plates. The impulses to keep the pendulum swinging are provided by an arm hanging behind the pendulum called the crutch, (e), which ends in a fork, (f) whose prongs embrace the pendulum rod. The crutch is pushed back and forth by the clock's escapement, (g,h). Each time the pendulum swings through its centre position, it releases one tooth of the escape wheel (g). The force of the clock's mainspring or a driving weight hanging from a pulley, transmitted through the clock's gear train, causes the wheel to turn, and a tooth presses against one of the pallets (h), giving the pendulum a short push. 
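A quick arithmetic check of the stability figure quoted above, as a sketch: since the period scales with the square root of the length, a fractional length change of δ shifts the period by roughly δ/2.

```python
# Rate error caused by a 0.02% change in pendulum length.
# T is proportional to sqrt(L), so dT/T is roughly (1/2) * dL/L.
seconds_per_week = 7 * 24 * 3600
fractional_length_change = 0.0002      # 0.02%, about 0.2 mm on a ~1 m pendulum
fractional_period_change = fractional_length_change / 2
print(fractional_period_change * seconds_per_week)  # about 60 s, i.e. a minute per week
```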
The clock's wheels, geared to the escape wheel, move forward a fixed amount with each pendulum swing, advancing the clock's hands at a steady rate. The pendulum always has a means of adjusting the period, usually by an adjustment nut (c) under the bob which moves it up or down on the rod. Moving the bob up decreases the pendulum's length, causing the pendulum to swing faster and the clock to gain time. Some precision clocks have a small auxiliary adjustment weight on a threaded shaft on the bob, to allow finer adjustment. Some tower clocks and precision clocks use a tray attached near to the midpoint of the pendulum rod, to which small weights can be added or removed. This effectively shifts the centre of oscillation and allows the rate to be adjusted without stopping the clock. The pendulum must be suspended from a rigid support. During operation, any elasticity will allow tiny imperceptible swaying motions of the support, which disturbs the clock's period, resulting in error. Pendulum clocks should be attached firmly to a sturdy wall. The most common pendulum length in quality clocks, which is always used in grandfather clocks, is the seconds pendulum, about long. In mantel clocks, half-second pendulums, long, or shorter, are used. Only a few large tower clocks use longer pendulums, the 1.5 second pendulum, long, or occasionally the two-second pendulum, which is used in Big Ben. Temperature compensation The largest source of error in early pendulums was slight changes in length due to thermal expansion and contraction of the pendulum rod with changes in ambient temperature. This was discovered when people noticed that pendulum clocks ran slower in summer, by as much as a minute per week (one of the first was Godefroy Wendelin, as reported by Huygens in 1658). Thermal expansion of pendulum rods was first studied by Jean Picard in 1669. A pendulum with a steel rod will expand by about 11.3 parts per million (ppm) with each degree Celsius increase, causing it to lose about 0.27 seconds per day for every degree Celsius increase in temperature, or 9 seconds per day for a change. Wood rods expand less, losing only about 6 seconds per day for a change, which is why quality clocks often had wooden pendulum rods. The wood had to be varnished to prevent water vapor from getting in, because changes in humidity also affected the length. Mercury pendulum The first device to compensate for this error was the mercury pendulum, invented by George Graham in 1721. The liquid metal mercury expands in volume with temperature. In a mercury pendulum, the pendulum's weight (bob) is a container of mercury. With a temperature rise, the pendulum rod gets longer, but the mercury also expands and its surface level rises slightly in the container, moving its centre of mass closer to the pendulum pivot. By using the correct height of mercury in the container these two effects will cancel, leaving the pendulum's centre of mass, and its period, unchanged with temperature. Its main disadvantage was that when the temperature changed, the rod would come to the new temperature quickly but the mass of mercury might take a day or two to reach the new temperature, causing the rate to deviate during that time. To improve thermal accommodation several thin containers were often used, made of metal. Mercury pendulums were the standard used in precision regulator clocks into the 20th century. Gridiron pendulum The most widely used compensated pendulum was the gridiron pendulum, invented in 1726 by John Harrison. 
This consists of alternating rods of two different metals, one with lower thermal expansion (CTE), steel, and one with higher thermal expansion, zinc or brass. The rods are connected by a frame, as shown in the drawing at the right, so that an increase in length of the zinc rods pushes the bob up, shortening the pendulum. With a temperature increase, the low expansion steel rods make the pendulum longer, while the high expansion zinc rods make it shorter. By making the rods of the correct lengths, the greater expansion of the zinc cancels out the expansion of the steel rods which have a greater combined length, and the pendulum stays the same length with temperature. Zinc-steel gridiron pendulums are made with 5 rods, but the thermal expansion of brass is closer to steel, so brass-steel gridirons usually require 9 rods. Gridiron pendulums adjust to temperature changes faster than mercury pendulums, but scientists found that friction of the rods sliding in their holes in the frame caused gridiron pendulums to adjust in a series of tiny jumps. In high precision clocks this caused the clock's rate to change suddenly with each jump. Later it was found that zinc is subject to creep. For these reasons mercury pendulums were used in the highest precision clocks, but gridirons were used in quality regulator clocks. Gridiron pendulums became so associated with good quality that, to this day, many ordinary clock pendulums have decorative 'fake' gridirons that don't actually have any temperature compensation function. Invar and fused quartz Around 1900, low thermal expansion materials were developed which could be used as pendulum rods in order to make elaborate temperature compensation unnecessary. These were only used in a few of the highest precision clocks before the pendulum became obsolete as a time standard. In 1896 Charles Édouard Guillaume invented the nickel steel alloy Invar. This has a CTE of around (), resulting in pendulum temperature errors over of only 1.3 seconds per day, and this residual error could be compensated to zero with a few centimeters of aluminium under the pendulum bob (this can be seen in the Riefler clock image above). Invar pendulums were first used in 1898 in the Riefler regulator clock which achieved accuracy of 15 milliseconds per day. Suspension springs of Elinvar were used to eliminate temperature variation of the spring's restoring force on the pendulum. Later fused quartz was used which had even lower CTE. These materials are the choice for modern high accuracy pendulums. Atmospheric pressure The effect of the surrounding air on a moving pendulum is complex and requires fluid mechanics to calculate precisely, but for most purposes its influence on the period can be accounted for by three effects: By Archimedes' principle the effective weight of the bob is reduced by the buoyancy of the air it displaces, while the mass (inertia) remains the same, reducing the pendulum's acceleration during its swing and increasing the period. This depends on the air pressure and the density of the pendulum, but not its shape. The pendulum carries an amount of air with it as it swings, and the mass of this air increases the inertia of the pendulum, again reducing the acceleration and increasing the period. This depends on both its density and shape. Viscous air resistance slows the pendulum's velocity. This has a negligible effect on the period, but dissipates energy, reducing the amplitude. 
This reduces the pendulum's Q factor, requiring a stronger drive force from the clock's mechanism to keep it moving, which causes increased disturbance to the period. Increases in barometric pressure increase a pendulum's period slightly due to the first two effects, by about . Researchers using pendulums to measure the acceleration of gravity had to correct the period for the air pressure at the altitude of measurement, computing the equivalent period of a pendulum swinging in vacuum. A pendulum clock was first operated in a constant-pressure tank by Friedrich Tiede in 1865 at the Berlin Observatory, and by 1900 the highest precision clocks were mounted in tanks that were kept at a constant pressure to eliminate changes in atmospheric pressure. Alternatively, in some a small aneroid barometer mechanism attached to the pendulum compensated for this effect. Gravity Pendulums are affected by changes in gravitational acceleration, which varies by as much as 0.5% at different locations on Earth, so precision pendulum clocks have to be recalibrated after a move. Even moving a pendulum clock to the top of a tall building can cause it to lose measurable time from the reduction in gravity. Accuracy of pendulums as timekeepers The timekeeping elements in all clocks, which include pendulums, balance wheels, the quartz crystals used in quartz watches, and even the vibrating atoms in atomic clocks, are in physics called harmonic oscillators. The reason harmonic oscillators are used in clocks is that they vibrate or oscillate at a specific resonant frequency or period and resist oscillating at other rates. However, the resonant frequency is not infinitely 'sharp'. Around the resonant frequency there is a narrow natural band of frequencies (or periods), called the resonance width or bandwidth, where the harmonic oscillator will oscillate. In a clock, the actual frequency of the pendulum may vary randomly within this resonance width in response to disturbances, but at frequencies outside this band, the clock will not function at all. The resonance width is determined by the damping, the frictional energy loss per swing of the pendulum. Q factor The measure of a harmonic oscillator's resistance to disturbances to its oscillation period is a dimensionless parameter called the Q factor equal to the resonant frequency divided by the resonance width. The higher the Q, the smaller the resonance width, and the more constant the frequency or period of the oscillator for a given disturbance. The reciprocal of the Q is roughly proportional to the limiting accuracy achievable by a harmonic oscillator as a time standard. The Q is related to how long it takes for the oscillations of an oscillator to die out. The Q of a pendulum can be measured by counting the number of oscillations it takes for the amplitude of the pendulum's swing to decay to 1/e = 36.8% of its initial swing, and multiplying by 'π. In a clock, the pendulum must receive pushes from the clock's movement to keep it swinging, to replace the energy the pendulum loses to friction. These pushes, applied by a mechanism called the escapement, are the main source of disturbance to the pendulum's motion. The Q is equal to 2π times the energy stored in the pendulum, divided by the energy lost to friction during each oscillation period, which is the same as the energy added by the escapement each period. 
It can be seen that the smaller the fraction of the pendulum's energy that is lost to friction, the less energy needs to be added, the less the disturbance from the escapement, the more 'independent' the pendulum is of the clock's mechanism, and the more constant its period is. The Q of a pendulum is given by Q = Mω/Γ, where M is the mass of the bob, ω is the pendulum's radian frequency of oscillation, and Γ is the frictional damping force on the pendulum per unit velocity. ω is fixed by the pendulum's period, and M is limited by the load capacity and rigidity of the suspension. So the Q of clock pendulums is increased by minimizing frictional losses (Γ). Precision pendulums are suspended on low friction pivots consisting of triangular shaped 'knife' edges resting on agate plates. Around 99% of the energy loss in a freeswinging pendulum is due to air friction, so mounting a pendulum in a vacuum tank can increase the Q, and thus the accuracy, by a factor of 100. The Q of pendulums ranges from several thousand in an ordinary clock to several hundred thousand for precision regulator pendulums swinging in vacuum. A quality home pendulum clock might have a Q of 10,000 and an accuracy of 10 seconds per month. The most accurate commercially produced pendulum clock was the Shortt-Synchronome free pendulum clock, invented in 1921. Its Invar master pendulum swinging in a vacuum tank had a Q of 110,000 and an error rate of around a second per year. This Q of 10³–10⁵ is one reason why pendulums are more accurate timekeepers than the balance wheels in watches, with Q around 100–300, but less accurate than the quartz crystals in quartz clocks, with Q of 10⁵–10⁶. Escapement Pendulums (unlike, for example, quartz crystals) have a low enough Q that the disturbance caused by the impulses to keep them moving is generally the limiting factor on their timekeeping accuracy. Therefore, the design of the escapement, the mechanism that provides these impulses, has a large effect on the accuracy of a clock pendulum. If the impulses given to the pendulum by the escapement each swing could be exactly identical, the response of the pendulum would be identical, and its period would be constant. However, this is not achievable; unavoidable random fluctuations in the force due to friction of the clock's pallets, lubrication variations, and changes in the torque provided by the clock's power source as it runs down, mean that the force of the impulse applied by the escapement varies. If these variations in the escapement's force cause changes in the pendulum's width of swing (amplitude), this will cause corresponding slight changes in the period, since (as discussed at top) a pendulum with a finite swing is not quite isochronous. Therefore, the goal of traditional escapement design is to apply the force with the proper profile, and at the correct point in the pendulum's cycle, so force variations have no effect on the pendulum's amplitude. This is called an isochronous escapement. The Airy condition Clockmakers had known for centuries that the disturbing effect of the escapement's drive force on the period of a pendulum is smallest if given as a short impulse as the pendulum passes through its bottom equilibrium position. If the impulse occurs before the pendulum reaches bottom, during the downward swing, it will have the effect of shortening the pendulum's natural period, so an increase in drive force will decrease the period. 
If the impulse occurs after the pendulum reaches bottom, during the upswing, it will lengthen the period, so an increase in drive force will increase the pendulum's period. In 1826 British astronomer George Airy proved this; specifically, he proved that if a pendulum is driven by an impulse that is symmetrical about its bottom equilibrium position, the pendulum's period will be unaffected by changes in the drive force. The most accurate escapements, such as the deadbeat, approximately satisfy this condition. Gravity measurement The presence of the acceleration of gravity g in the periodicity equation (1) for a pendulum means that the local gravitational acceleration of the Earth can be calculated from the period of a pendulum. A pendulum can therefore be used as a gravimeter to measure the local gravity, which varies by over 0.5% across the surface of the Earth. The value of g (acceleration due to gravity) at the equator is 9.780 m/s² and at the poles is 9.832 m/s², a difference of 0.53%. The pendulum in a clock is disturbed by the pushes it receives from the clock movement, so freeswinging pendulums were used, and were the standard instruments of gravimetry up to the 1930s. The difference between clock pendulums and gravimeter pendulums is that to measure gravity, the pendulum's length as well as its period has to be measured. The period of freeswinging pendulums could be found to great precision by comparing their swing with a precision clock that had been adjusted to keep correct time by the passage of stars overhead. In the early measurements, a weight on a cord was suspended in front of the clock pendulum, and its length adjusted until the two pendulums swung in exact synchronism. Then the length of the cord was measured. From the length and the period, g could be calculated from equation (1). The seconds pendulum The seconds pendulum, a pendulum with a period of two seconds so each swing takes one second, was widely used to measure gravity, because its period could be easily measured by comparing it to precision regulator clocks, which all had seconds pendulums. By the late 17th century, the length of the seconds pendulum became the standard measure of the strength of gravitational acceleration at a location. By 1700 its length had been measured with submillimeter accuracy at several cities in Europe. For a seconds pendulum, g is proportional to its length: g = π²L. Early observations 1620: British scientist Francis Bacon was one of the first to propose using a pendulum to measure gravity, suggesting taking one up a mountain to see if gravity varies with altitude. 1644: Even before the pendulum clock, French priest Marin Mersenne first determined the length of the seconds pendulum by comparing the swing of a pendulum to the time it took a weight to fall a measured distance. He was also the first to discover the dependence of the period on the amplitude of swing. 1669: Jean Picard determined the length of the seconds pendulum at Paris, using a copper ball suspended by an aloe fiber. He also did the first experiments on thermal expansion and contraction of pendulum rods with temperature. 1672: The first observation that gravity varied at different points on Earth was made in 1672 by Jean Richer, who took a pendulum clock to Cayenne, French Guiana, and found that it lost minutes per day; its seconds pendulum had to be shortened by about 2.6 mm relative to its length at Paris to keep correct time. 
In 1687 Isaac Newton in Principia Mathematica showed this was because the Earth had a slightly oblate shape (flattened at the poles) caused by the centrifugal force of its rotation. At higher latitudes the surface was closer to the center of the Earth, so gravity increased with latitude. From this time on, pendulums began to be taken to distant lands to measure gravity, and tables were compiled of the length of the seconds pendulum at different locations on Earth. In 1743 Alexis Claude Clairaut created the first hydrostatic model of the Earth, Clairaut's theorem, which allowed the ellipticity of the Earth to be calculated from gravity measurements. Progressively more accurate models of the shape of the Earth followed. 1687: Newton experimented with pendulums (described in Principia) and found that equal length pendulums with bobs made of different materials had the same period, proving that the gravitational force on different substances was exactly proportional to their mass (inertia). This principle, called the equivalence principle, confirmed to greater accuracy in later experiments, became the foundation on which Albert Einstein based his general theory of relativity. 1737: French mathematician Pierre Bouguer made a sophisticated series of pendulum observations in the Andes mountains, Peru. He used a copper pendulum bob in the shape of a double pointed cone suspended by a thread; the bob could be reversed to eliminate the effects of nonuniform density. He calculated the length to the center of oscillation of thread and bob combined, instead of using the center of the bob. He corrected for thermal expansion of the measuring rod and barometric pressure, giving his results for a pendulum swinging in vacuum. Bouguer swung the same pendulum at three different elevations, from sea level to the top of the high Peruvian altiplano. Gravity should fall with the inverse square of the distance from the center of the Earth. Bouguer found that it fell off more slowly, and correctly attributed the 'extra' gravity to the gravitational field of the huge Peruvian plateau. From the density of rock samples he calculated an estimate of the effect of the altiplano on the pendulum, and comparing this with the gravity of the Earth was able to make the first rough estimate of the density of the Earth. 1747: Daniel Bernoulli showed how to correct for the lengthening of the period due to a finite angle of swing θ0 by using the first order correction θ0²/16, giving the period of a pendulum with an extremely small swing. 1792: To define a pendulum standard of length for use with the new metric system, in 1792 Jean-Charles de Borda and Jean-Dominique Cassini made a precise measurement of the seconds pendulum at Paris. They used a 14 mm platinum ball suspended by an iron wire. Their main innovation was a technique called the "method of coincidences" which allowed the period of pendulums to be compared with great precision. (Bouguer had also used this method.) The time interval Δt between the recurring instants when the two pendulums swung in synchronism was timed. From this the difference between the periods of the pendulums, T1 and T2, could be calculated: 1/T1 − 1/T2 = 1/Δt, where T1 is the shorter of the two periods. 1821: Francesco Carlini made pendulum observations on top of Mount Cenis, Italy, from which, using methods similar to Bouguer's, he calculated the density of the Earth. He compared his measurements to an estimate of the gravity at his location assuming the mountain wasn't there, calculated from previous nearby pendulum measurements at sea level. 
His measurements showed 'excess' gravity, which he allocated to the effect of the mountain. Modeling the mountain as a segment of a sphere in diameter and high, from rock samples he calculated its gravitational field, and estimated the density of the Earth at 4.39 times that of water. Later recalculations by others gave values of 4.77 and 4.95, illustrating the uncertainties in these geographical methods. Kater's pendulum The precision of the early gravity measurements above was limited by the difficulty of measuring the length of the pendulum, L . L was the length of an idealized simple gravity pendulum (described at top), which has all its mass concentrated in a point at the end of the cord. In 1673 Huygens had shown that the period of a rigid bar pendulum (called a compound pendulum) was equal to the period of a simple pendulum with a length equal to the distance between the pivot point and a point called the center of oscillation, located under the center of gravity, that depends on the mass distribution along the pendulum. But there was no accurate way of determining the center of oscillation in a real pendulum. Huygens' discovery is sometimes referred to as Huygens' law of the (cycloidal) pendulum. To get around this problem, the early researchers above approximated an ideal simple pendulum as closely as possible by using a metal sphere suspended by a light wire or cord. If the wire was light enough, the center of oscillation was close to the center of gravity of the ball, at its geometric center. This "ball and wire" type of pendulum wasn't very accurate, because it didn't swing as a rigid body, and the elasticity of the wire caused its length to change slightly as the pendulum swung. However Huygens had also proved that in any pendulum, the pivot point and the center of oscillation were interchangeable. That is, if a pendulum were turned upside down and hung from its center of oscillation, it would have the same period as it did in the previous position, and the old pivot point would be the new center of oscillation. British physicist and army captain Henry Kater in 1817 realized that Huygens' principle could be used to find the length of a simple pendulum with the same period as a real pendulum. If a pendulum was built with a second adjustable pivot point near the bottom so it could be hung upside down, and the second pivot was adjusted until the periods when hung from both pivots were the same, the second pivot would be at the center of oscillation, and the distance between the two pivots would be the length L of a simple pendulum with the same period. Kater built a reversible pendulum (see drawing) consisting of a brass bar with two opposing pivots made of short triangular "knife" blades (a) near either end. It could be swung from either pivot, with the knife blades supported on agate plates. Rather than make one pivot adjustable, he attached the pivots a meter apart and instead adjusted the periods with a moveable weight on the pendulum rod (b,c). In operation, the pendulum is hung in front of a precision clock, and the period timed, then turned upside down and the period timed again. The weight is adjusted with the adjustment screw until the periods are equal. Then putting this period and the distance between the pivots into equation (1) gives the gravitational acceleration g very accurately. Kater timed the swing of his pendulum using the "method of coincidences" and measured the distance between the two pivots with a micrometer. 
After applying corrections for the finite amplitude of swing, the buoyancy of the bob, the barometric pressure and altitude, and temperature, he obtained a value of 39.13929 inches for the seconds pendulum at London, in vacuum, at sea level, at 62 °F. The largest variation from the mean of his 12 observations was 0.00028 in. representing a precision of gravity measurement of 7×10−6 (7 mGal or 70 μm/s2). Kater's measurement was used as Britain's official standard of length (see below) from 1824 to 1855. Reversible pendulums (known technically as "convertible" pendulums) employing Kater's principle were used for absolute gravity measurements into the 1930s. Later pendulum gravimeters The increased accuracy made possible by Kater's pendulum helped make gravimetry a standard part of geodesy. Since the exact location (latitude and longitude) of the 'station' where the gravity measurement was made was necessary, gravity measurements became part of surveying, and pendulums were taken on the great geodetic surveys of the 18th century, particularly the Great Trigonometric Survey of India. Invariable pendulums: Kater introduced the idea of relative gravity measurements, to supplement the absolute measurements made by a Kater's pendulum. Comparing the gravity at two different points was an easier process than measuring it absolutely by the Kater method. All that was necessary was to time the period of an ordinary (single pivot) pendulum at the first point, then transport the pendulum to the other point and time its period there. Since the pendulum's length was constant, from (1) the ratio of the gravitational accelerations was equal to the inverse of the ratio of the periods squared, and no precision length measurements were necessary. So once the gravity had been measured absolutely at some central station, by the Kater or other accurate method, the gravity at other points could be found by swinging pendulums at the central station and then taking them to the other location and timing their swing there. Kater made up a set of "invariable" pendulums, with only one knife edge pivot, which were taken to many countries after first being swung at a central station at Kew Observatory, UK. Airy's coal pit experiments: Starting in 1826, using methods similar to Bouguer, British astronomer George Airy attempted to determine the density of the Earth by pendulum gravity measurements at the top and bottom of a coal mine. The gravitational force below the surface of the Earth decreases rather than increasing with depth, because by Gauss's law the mass of the spherical shell of crust above the subsurface point does not contribute to the gravity. The 1826 experiment was aborted by the flooding of the mine, but in 1854 he conducted an improved experiment at the Harton coal mine, using seconds pendulums swinging on agate plates, timed by precision chronometers synchronized by an electrical circuit. He found the lower pendulum was slower by 2.24 seconds per day. This meant that the gravitational acceleration at the bottom of the mine, 1250 ft below the surface, was 1/14,000 less than it should have been from the inverse square law; that is the attraction of the spherical shell was 1/14,000 of the attraction of the Earth. From samples of surface rock he estimated the mass of the spherical shell of crust, and from this estimated that the density of the Earth was 6.565 times that of water. Von Sterneck attempted to repeat the experiment in 1882 but found inconsistent results. 
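The absolute and relative procedures just described reduce to two short formulas. The following sketch is a minimal illustration only: it assumes that equation (1) referred to above is the standard simple-pendulum relation T = 2π√(L/g), and the field-station period is an invented example value.

```python
import math

def absolute_g_kater(pivot_distance_m: float, period_s: float) -> float:
    """Absolute gravity from a Kater reversible pendulum.

    Once the periods about the two pivots are equal, the pivot separation is
    the length of the equivalent simple pendulum, so g = 4*pi^2*L / T^2.
    """
    return 4 * math.pi ** 2 * pivot_distance_m / period_s ** 2

def relative_g(g_reference: float, period_reference_s: float, period_field_s: float) -> float:
    """Relative gravity with an invariable pendulum.

    The pendulum length is fixed, so g * T^2 is constant and
    g_field = g_reference * (T_reference / T_field)^2.
    """
    return g_reference * (period_reference_s / period_field_s) ** 2

# Kater's published London figures: 39.13929 inches between pivots, 2 s period
# (each one-second swing is a half period).
L = 39.13929 * 0.0254                                  # metres
g_london = absolute_g_kater(L, 2.0)
print(f"absolute g at London ≈ {g_london:.4f} m/s^2")  # ≈ 9.81 m/s^2

# The same invariable pendulum timed at the reference station and then at a
# field station where it runs slightly slower, indicating weaker gravity there.
print(f"field-station g ≈ {relative_g(g_london, 2.00000, 2.00030):.4f} m/s^2")
```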
Repsold-Bessel pendulum: It was time-consuming and error-prone to repeatedly swing Kater's pendulum and adjust the weights until the periods were equal. Friedrich Bessel showed in 1835 that this was unnecessary. As long as the periods were close together, the gravity could be calculated from the two periods and the center of gravity of the pendulum. So the reversible pendulum didn't need to be adjustable; it could just be a bar with two pivots. Bessel also showed that if the pendulum was made symmetrical in form about its center, but was weighted internally at one end, the errors due to air drag would cancel out. Further, another error due to the finite diameter of the knife edges could be made to cancel out if they were interchanged between measurements. Bessel didn't construct such a pendulum, but in 1864 Adolf Repsold, under contract to the Swiss Geodetic Commission, made a pendulum along these lines. The Repsold pendulum was about 56 cm long and had a period of about second. It was used extensively by European geodetic agencies, and with the Kater pendulum in the Survey of India. Similar pendulums of this type were designed by Charles Peirce and C. Defforges. Von Sterneck and Mendenhall gravimeters: In 1887 Austro-Hungarian scientist Robert von Sterneck developed a small gravimeter pendulum mounted in a temperature-controlled vacuum tank to eliminate the effects of temperature and air pressure. It used a "half-second pendulum," having a period close to one second, about 25 cm long. The pendulum was nonreversible, so the instrument was used for relative gravity measurements, but its small size made it light and portable. The period of the pendulum was picked off by reflecting the image of an electric spark created by a precision chronometer off a mirror mounted at the top of the pendulum rod. The Von Sterneck instrument, and a similar instrument developed by Thomas C. Mendenhall of the United States Coast and Geodetic Survey in 1890, were used extensively for surveys into the 1920s. The Mendenhall pendulum was actually a more accurate timekeeper than the highest precision clocks of the time, and as the 'world's best clock' it was used by Albert A. Michelson in his 1924 measurements of the speed of light on Mt. Wilson, California. Double pendulum gravimeters: Starting in 1875, the increasing accuracy of pendulum measurements revealed another source of error in existing instruments: the swing of the pendulum caused a slight swaying of the tripod stand used to support portable pendulums, introducing error. In 1875 Charles S. Peirce calculated that measurements of the length of the seconds pendulum made with the Repsold instrument required a correction of 0.2 mm due to this error. In 1880 C. Defforges used a Michelson interferometer to measure the sway of the stand dynamically, and interferometers were added to the standard Mendenhall apparatus to calculate sway corrections. A method of preventing this error was first suggested in 1877 by Hervé Faye and advocated by Peirce, Cellérier and Furtwängler: mount two identical pendulums on the same support, swinging with the same amplitude, 180° out of phase. The opposite motion of the pendulums would cancel out any sideways forces on the support. The idea was opposed due to its complexity, but by the start of the 20th century the Von Sterneck device and other instruments were modified to swing multiple pendulums simultaneously.
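Bessel's observation, that the two periods need not be exactly equalised, is usually expressed in a reversible-pendulum formula equivalent to the one below; the periods and the distances h1 and h2 from the centre of gravity to the two pivots are invented example values, not data from any historical instrument.

```python
import math

def gravity_reversible(T1: float, T2: float, h1: float, h2: float) -> float:
    """Gravity from a reversible pendulum with nearly (not exactly) equal periods.

    For a rigid pendulum swung from pivots at distances h1 and h2 from its
    centre of gravity, with periods T1 and T2:
        8*pi^2 / g = (T1^2 + T2^2)/(h1 + h2) + (T1^2 - T2^2)/(h1 - h2)
    The second term vanishes when the periods are equal and stays small as
    long as they are close, which is why exact adjustment is unnecessary.
    """
    term = (T1 ** 2 + T2 ** 2) / (h1 + h2) + (T1 ** 2 - T2 ** 2) / (h1 - h2)
    return 8 * math.pi ** 2 / term

# Example: pivots 0.994 m apart, centre of gravity 0.600 m from one of them.
print(gravity_reversible(T1=2.0004, T2=1.9998, h1=0.600, h2=0.394))
```

Because the correction term is multiplied by the small period difference, the centre of gravity only has to be located roughly, which is part of what made fixed-pivot pendulums of this type practical.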
Gulf gravimeter: One of the last and most accurate pendulum gravimeters was the apparatus developed in 1929 by the Gulf Research and Development Co. (Lenzen & Multauf 1964, p. 336, fig. 28). It used two pendulums made of fused quartz, each in length with a period of 0.89 second, swinging on Pyrex knife-edge pivots, 180° out of phase. They were mounted in a permanently sealed, temperature- and humidity-controlled vacuum chamber. Stray electrostatic charges on the quartz pendulums had to be discharged by exposing them to a radioactive salt before use. The period was detected by reflecting a light beam from a mirror at the top of the pendulum, recorded by a chart recorder and compared to a precision crystal oscillator calibrated against the WWV radio time signal. This instrument was accurate to within (0.3–0.5)×10⁻⁷ (30–50 microgals or 3–5 nm/s²). It was used into the 1960s. Relative pendulum gravimeters were superseded by the simpler LaCoste zero-length spring gravimeter, invented in 1934 by Lucien LaCoste. Absolute (reversible) pendulum gravimeters were replaced in the 1950s by free fall gravimeters, in which a weight is allowed to fall in a vacuum tank and its acceleration is measured by an optical interferometer. Standard of length Because the acceleration of gravity is constant at a given point on Earth, the period of a simple pendulum at a given location depends only on its length. Additionally, gravity varies only slightly at different locations. Almost from the pendulum's discovery until the early 19th century, this property led scientists to suggest using a pendulum of a given period as a standard of length. Until the 19th century, countries based their systems of length measurement on prototypes, metal bar primary standards, such as the standard yard in Britain kept at the Houses of Parliament, and the standard toise in France, kept at Paris. These were vulnerable to damage or destruction over the years, and because of the difficulty of comparing prototypes, the same unit often had different lengths in distant towns, creating opportunities for fraud. During the Enlightenment scientists argued for a length standard that was based on some property of nature that could be determined by measurement, creating an indestructible, universal standard. The period of pendulums could be measured very precisely by timing them with clocks that were set by the stars. A pendulum standard amounted to defining the unit of length by the gravitational force of the Earth, for all practical purposes constant, and the second, which was defined by the rotation rate of the Earth, also constant. The idea was that anyone, anywhere on Earth, could recreate the standard by constructing a pendulum that swung with the defined period and measuring its length. Virtually all proposals were based on the seconds pendulum, in which each swing (a half period) takes one second and which is about a meter (39 inches) long, because by the late 17th century it had become a standard for measuring gravity (see previous section). By the 18th century its length had been measured with sub-millimeter accuracy at a number of cities in Europe and around the world. The initial attraction of the pendulum length standard was that it was believed (by early scientists such as Huygens and Wren) that gravity was constant over the Earth's surface, so a given pendulum had the same period at any point on Earth.
So the length of the standard pendulum could be measured at any location, and would not be tied to any given nation or region; it would be a truly democratic, worldwide standard. Although Richer found in 1672 that gravity varies at different points on the globe, the idea of a pendulum length standard remained popular, because it was found that gravity only varies with latitude. Gravitational acceleration increases smoothly from the equator to the poles, due to the oblate shape of the Earth, so at any given latitude (east–west line), gravity was constant enough that the length of a seconds pendulum was the same within the measurement capability of the 18th century. Thus the unit of length could be defined at a given latitude and measured at any point along that latitude. For example, a pendulum standard defined at 45° north latitude, a popular choice, could be measured in parts of France, Italy, Croatia, Serbia, Romania, Russia, Kazakhstan, China, Mongolia, the United States and Canada. In addition, it could be recreated at any location at which the gravitational acceleration had been accurately measured. By the mid-19th century, increasingly accurate pendulum measurements by Edward Sabine and Thomas Young revealed that gravity, and thus the length of any pendulum standard, varied measurably with local geologic features such as mountains and dense subsurface rocks. So a pendulum length standard had to be defined at a single point on Earth and could only be measured there. This took much of the appeal from the concept, and efforts to adopt pendulum standards were abandoned. Early proposals One of the first to suggest defining length with a pendulum was Flemish scientist Isaac Beeckman, who in 1631 recommended making the seconds pendulum "the invariable measure for all people at all times in all places". Marin Mersenne, who first measured the seconds pendulum in 1644, also suggested it. The first official proposal for a pendulum standard was made by the British Royal Society in 1660, advocated by Christiaan Huygens and Ole Rømer, basing it on Mersenne's work, and Huygens in Horologium Oscillatorium proposed a "horary foot" defined as 1/3 of the seconds pendulum. Christopher Wren was another early supporter. The idea of a pendulum standard of length must have been familiar to people as early as 1663, because Samuel Butler satirizes it in Hudibras:
Upon the bench I will so handle ‘em
That the vibration of this pendulum
Shall make all taylors’ yards of one
Unanimous opinion
In 1671 Jean Picard proposed a pendulum-defined 'universal foot' in his influential Mesure de la Terre. Gabriel Mouton around 1670 suggested defining the toise either by a seconds pendulum or a minute of terrestrial degree. A plan for a complete system of units based on the pendulum was advanced in 1675 by Italian polymath Tito Livio Burattini. In France in 1747, geographer Charles Marie de la Condamine proposed defining length by a seconds pendulum at the equator, since at this location a pendulum's swing wouldn't be distorted by the Earth's rotation. James Steuart (1780) and George Skene Keith were also supporters. By the end of the 18th century, when many nations were reforming their weight and measure systems, the seconds pendulum was the leading choice for a new definition of length, advocated by prominent scientists in several major nations.
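How long a seconds pendulum is, and how little its length varies with latitude, can be checked against the simple-pendulum relation. The sketch below is illustrative only; it uses a modern normal-gravity approximation (the 1980 International Gravity Formula), which the 18th-century proposers did not have.

```python
import math

def normal_gravity(latitude_deg: float) -> float:
    """Approximate sea-level gravity versus latitude (International Gravity Formula 1980)."""
    s = math.sin(math.radians(latitude_deg))
    s2 = math.sin(math.radians(2 * latitude_deg))
    return 9.780327 * (1 + 0.0053024 * s ** 2 - 0.0000058 * s2 ** 2)

def seconds_pendulum_length(latitude_deg: float) -> float:
    # A seconds pendulum has a 2 s period, so L = g * T^2 / (4*pi^2) = g / pi^2.
    return normal_gravity(latitude_deg) / math.pi ** 2

for lat in (0, 45, 90):
    print(f"latitude {lat:2d}°: L = {seconds_pendulum_length(lat) * 1000:.2f} mm")
# Roughly 991 mm at the equator and 996 mm at the poles: a spread of only about
# 5 mm, which is why a standard tied to a single latitude (such as 45°) seemed
# workable before local geological variations were appreciated.
```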
In 1790, then US Secretary of State Thomas Jefferson proposed to Congress a comprehensive decimalized US 'metric system' based on the seconds pendulum at 38° North latitude, the mean latitude of the United States. No action was taken on this proposal. In Britain the leading advocate of the pendulum was politician John Riggs Miller. When his efforts to promote a joint British–French–American metric system fell through in 1790, he proposed a British system based on the length of the seconds pendulum at London. This standard was adopted in 1824 (below). The metre In the discussions leading up to the French adoption of the metric system in 1791, the leading candidate for the definition of the new unit of length, the metre, was the seconds pendulum at 45° North latitude. It was advocated by a group led by French politician Talleyrand and mathematician Antoine Nicolas Caritat de Condorcet. This was one of the three final options considered by the French Academy of Sciences committee. However, on March 19, 1791, the committee instead chose to base the metre on the length of the meridian through Paris. A pendulum definition was rejected because of its variability at different locations, and because it defined length by a unit of time. (However, since 1983 the metre has been officially defined in terms of the length of the second and the speed of light.) A possible additional reason is that the radical French Academy didn't want to base their new system on the second, a traditional and nondecimal unit from the ancien regime. Although not defined by the pendulum, the final length chosen for the metre, 10−7 of the pole-to-equator meridian arc, was very close to the length of the seconds pendulum (0.9937 m), within 0.63%. Although no reason for this particular choice was given at the time, it was probably to facilitate the use of the seconds pendulum as a secondary standard, as was proposed in the official document. So the modern world's standard unit of length is certainly closely linked historically with the seconds pendulum. Britain and Denmark Britain and Denmark appear to be the only nations that (for a short time) based their units of length on the pendulum. In 1821 the Danish inch was defined as 1/38 of the length of the mean solar seconds pendulum at 45° latitude at the meridian of Skagen, at sea level, in vacuum. The British parliament passed the Imperial Weights and Measures Act in 1824, a reform of the British standard system which declared that if the prototype standard yard was destroyed, it would be recovered by defining the inch so that the length of the solar seconds pendulum at London, at sea level, in a vacuum, at 62 °F was 39.1393 inches. This also became the US standard, since at the time the US used British measures. However, when the prototype yard was lost in the 1834 Houses of Parliament fire, it proved impossible to recreate it accurately from the pendulum definition, and in 1855 Britain repealed the pendulum standard and returned to prototype standards. Other uses Seismometers A pendulum in which the rod is not vertical but almost horizontal was used in early seismometers for measuring Earth tremors. The bob of the pendulum does not move when its mounting does, and the difference in the movements is recorded on a drum chart. 
Schuler tuning As first explained by Maximilian Schuler in a 1923 paper, a pendulum whose period exactly equals the orbital period of a hypothetical satellite orbiting just above the surface of the Earth (about 84 minutes) will tend to remain pointing at the center of the Earth when its support is suddenly displaced. This principle, called Schuler tuning, is used in inertial guidance systems in ships and aircraft that operate on the surface of the Earth. No physical pendulum is used, but the control system that keeps the inertial platform containing the gyroscopes stable is modified so the device acts as though it is attached to such a pendulum, keeping the platform always facing down as the vehicle moves on the curved surface of the Earth. Coupled pendulums In 1665 Huygens made a curious observation about pendulum clocks. Two clocks had been placed on his mantlepiece, and he noted that they had acquired an opposing motion. That is, their pendulums were beating in unison but in the opposite direction; 180° out of phase. Regardless of how the two clocks were started, he found that they would eventually return to this state, thus making the first recorded observation of a coupled oscillator. The cause of this behavior was that the two pendulums were affecting each other through slight motions of the supporting mantlepiece. This process is called entrainment or mode locking in physics and is observed in other coupled oscillators. Synchronized pendulums have been used in clocks and were widely used in gravimeters in the early 20th century. Although Huygens only observed out-of-phase synchronization, recent investigations have shown the existence of in-phase synchronization, as well as "death" states wherein one or both clocks stops. Religious practice Pendulum motion appears in religious ceremonies as well. The swinging incense burner called a censer, also known as a thurible, is an example of a pendulum. Pendulums are also seen at many gatherings in eastern Mexico where they mark the turning of the tides on the day which the tides are at their highest point. Pendulums may also be used for dowsing. Education Pendulums are widely used in science education as an example of a harmonic oscillator, to teach dynamics and oscillatory motion. One use is to demonstrate the law of conservation of energy. A heavy object such as a bowling ball or wrecking ball is attached to a string. The weight is then moved to within a few inches of a volunteer's face, then released and allowed to swing and come back. In most instances, the weight reverses direction and then returns to (almost) the same position as the original release location — i.e. a small distance from the volunteer's face — thus leaving the volunteer unharmed. On occasion the volunteer is injured if either the volunteer does not stand still or the pendulum is initially released with a push (so that when it returns it surpasses the release position). Torture device It is claimed that the pendulum was used as an instrument of torture and execution by the Spanish Inquisition in the 18th century. The allegation is contained in the 1826 book The history of the Inquisition of Spain by the Spanish priest, historian and liberal activist Juan Antonio Llorente. A swinging pendulum whose edge is a knife blade slowly descends toward a bound prisoner until it cuts into his body. This method of torture came to popular consciousness through the 1842 short story "The Pit and the Pendulum" by American author Edgar Allan Poe. 
Most knowledgeable sources are skeptical that this torture was ever actually used. The only evidence of its use is one paragraph in the preface to Llorente's 1826 History, relating a second-hand account by a single prisoner released from the Inquisition's Madrid dungeon in 1820, who purportedly described the pendulum torture method. Modern sources point out that due to Jesus' admonition against bloodshed, Inquisitors were only allowed to use torture methods which did not spill blood, and the pendulum method would have violated this stricture. One theory is that Llorente misunderstood the account he heard; the prisoner was actually referring to another common Inquisition torture, the strappado (garrucha), in which the prisoner has his hands tied behind his back and is hoisted off the floor by a rope tied to his hands. This method was also known as the "pendulum". Poe's popular horror tale, and public awareness of the Inquisition's other brutal methods, have kept the myth of this elaborate torture method alive. Pendulum wave A pendulum wave is a physics demonstration and kinetic art comprising several uncoupled pendulums with different lengths. As the pendulums oscillate, they appear to produce travelling and standing waves, beating, and random motion.
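The lengths needed for a pendulum wave follow from the simple-pendulum period formula: over one repeat time the longest pendulum completes some whole number N of oscillations and each successively shorter pendulum completes one more. A minimal sketch, with the repeat time, N and the number of pendulums chosen as arbitrary example values:

```python
import math

g = 9.81             # m/s^2
repeat_time = 60.0   # time after which the pattern realigns, s (example)
N = 51               # oscillations of the longest pendulum in that time (example)
count = 15           # number of pendulums (example)

# Pendulum n completes N + n oscillations in the repeat time, so its period is
# T_n = repeat_time / (N + n) and its length is L_n = g * (T_n / (2*pi))^2.
for n in range(count):
    T_n = repeat_time / (N + n)
    L_n = g * (T_n / (2 * math.pi)) ** 2
    print(f"pendulum {n + 1:2d}: length {L_n * 100:5.1f} cm")
```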
Technology
Timekeeping
null
42726
https://en.wikipedia.org/wiki/Cephalopod
Cephalopod
A cephalopod is any member of the molluscan class Cephalopoda (Greek for "head-feet") such as a squid, octopus, cuttlefish, or nautilus. These exclusively marine animals are characterized by bilateral body symmetry, a prominent head, and a set of arms or tentacles (muscular hydrostats) modified from the primitive molluscan foot. Fishers sometimes call cephalopods "inkfish", referring to their common ability to squirt ink. The study of cephalopods is a branch of malacology known as teuthology. Cephalopods became dominant during the Ordovician period, represented by primitive nautiloids. The class now contains two, only distantly related, extant subclasses: Coleoidea, which includes octopuses, squid, and cuttlefish; and Nautiloidea, represented by Nautilus and Allonautilus. In the Coleoidea, the molluscan shell has been internalized or is absent, whereas in the Nautiloidea, the external shell remains. About 800 living species of cephalopods have been identified. Two important extinct taxa are the Ammonoidea (ammonites) and Belemnoidea (belemnites). Extant cephalopods range in size from the 10 mm (0.3 in) Idiosepius thailandicus to the colossal squid, which at up to 700 kilograms (1,500 lb) is the largest extant invertebrate. Distribution There are over 800 extant species of cephalopod, although new species continue to be described. An estimated 11,000 extinct taxa have been described, although the soft-bodied nature of cephalopods means they are not easily fossilised. Cephalopods are found in all the oceans of Earth. None of them can tolerate fresh water, but the brief squid, Lolliguncula brevis, found in Chesapeake Bay, is a notable partial exception in that it tolerates brackish water. Cephalopods are thought to be unable to live in fresh water due to multiple biochemical constraints, and in their >400 million year existence have never ventured into fully freshwater habitats. Cephalopods occupy most of the depth of the ocean, from the abyssal plains to the sea surface, and have also been found in the hadal zone. Their diversity is greatest near the equator (~40 species retrieved in nets at 11°N by a diversity study) and decreases towards the poles (~5 species captured at 60°N). Biology Nervous system and behavior Cephalopods are widely regarded as the most intelligent of the invertebrates and have well-developed senses and large brains (larger than those of gastropods). The nervous system of cephalopods is the most complex of the invertebrates, and their brain-to-body-mass ratio falls between that of endothermic and ectothermic vertebrates. Captive cephalopods have also been known to climb out of their aquaria, maneuver across the lab floor, enter another aquarium to feed on captive crabs, and return to their own aquarium. The brain is protected in a cartilaginous cranium. The giant nerve fibers of the cephalopod mantle have been widely used for many years as experimental material in neurophysiology; their large diameter (due to lack of myelination) makes them relatively easy to study compared with those of other animals. Many cephalopods are social creatures; when isolated from their own kind, some species have been observed shoaling with fish. Some cephalopods are able to fly through the air for distances of up to . While cephalopods are not particularly aerodynamic, they achieve these impressive ranges by jet-propulsion; water continues to be expelled from the funnel while the organism is in the air.
The animals spread their fins and tentacles to form wings and actively control lift force with body posture. One species, Todarodes pacificus, has been observed spreading tentacles in a flat fan shape with a mucus film between the individual tentacles, while another, Sepioteuthis sepioidea, has been observed putting the tentacles in a circular arrangement. Senses Cephalopods have advanced vision, can detect gravity with statocysts, and have a variety of chemical sense organs. Octopuses use their arms to explore their environment and can use them for depth perception. Vision Most cephalopods rely on vision to detect predators and prey and to communicate with one another. Consequently, cephalopod vision is acute: training experiments have shown that the common octopus can distinguish the brightness, size, shape, and horizontal or vertical orientation of objects. The morphological construction gives cephalopod eyes the same performance as shark eyes; however, their construction differs, as cephalopods lack a cornea and have an everted retina. Cephalopods' eyes are also sensitive to the plane of polarization of light. Unlike many other cephalopods, nautiluses do not have good vision; their eye structure is highly developed, but lacks a solid lens. They have a simple "pinhole" eye through which water can pass. Instead of vision, the animal is thought to use olfaction as the primary sense for foraging, as well as locating or identifying potential mates. All octopuses and most cephalopods are considered to be color blind. Coleoid cephalopods (octopus, squid, cuttlefish) have a single photoreceptor type and lack the ability to determine color by comparing detected photon intensity across multiple spectral channels. When camouflaging themselves, they use their chromatophores to change brightness and pattern according to the background they see, but their ability to match the specific color of a background may come from cells such as iridophores and leucophores that reflect light from the environment. They also produce visual pigments throughout their body and may sense light levels directly from their body. Evidence of color vision has been found in the sparkling enope squid (Watasenia scintillans). It achieves color vision with three photoreceptors, which are based on the same opsin, but use distinct retinal molecules as chromophores: A1 (retinal), A2 (3-dehydroretinal), and A4 (4-hydroxyretinal). The A1-photoreceptor is most sensitive to green-blue (484 nm), the A2-photoreceptor to blue-green (500 nm), and the A4-photoreceptor to blue (470 nm) light. In 2015, a novel mechanism for spectral discrimination in cephalopods was described. This relies on the exploitation of chromatic aberration (wavelength-dependence of focal length). Numerical modeling shows that chromatic aberration can yield useful chromatic information through the dependence of image acuity on accommodation. The unusual off-axis slit and annular pupil shapes in cephalopods enhance this ability by acting as prisms that scatter white light in all directions. Photoreception In 2015, molecular evidence was published indicating that cephalopod chromatophores are photosensitive; reverse transcription polymerase chain reactions (RT-PCR) revealed transcripts encoding rhodopsin and retinochrome within the retinas and skin of the longfin inshore squid (Doryteuthis pealeii), and the common cuttlefish (Sepia officinalis) and broadclub cuttlefish (Sepia latimanus).
The authors claim this is the first evidence that cephalopod dermal tissues may possess the required combination of molecules to respond to light. Hearing Some squids have been shown to detect sound using their statocysts, but, in general, cephalopods are deaf. Use of light Most cephalopods possess an assemblage of skin components that interact with light. These may include iridophores, leucophores, chromatophores and (in some species) photophores. Chromatophores are colored pigment cells that expand and contract to produce color and pattern, which the animals can use in a startling array of fashions. As well as providing camouflage with their background, some cephalopods bioluminesce, shining light downwards to disguise their shadows from any predators that may lurk below. The bioluminescence is produced by bacterial symbionts; the host cephalopod is able to detect the light produced by these organisms. Bioluminescence may also be used to entice prey, and some species use colorful displays to impress mates, startle predators, or even communicate with one another. Coloration Cephalopods can change their colors and patterns in milliseconds, whether for signalling (both within the species and for warning) or active camouflage, as their chromatophores are expanded or contracted. Although color changes appear to rely primarily on visual input, there is evidence that skin cells, specifically chromatophores, can detect light and adjust to light conditions independently of the eyes. The octopus changes skin color and texture during quiet and active sleep cycles. Cephalopods can use chromatophores like a muscle, which is why they can change their skin hue as rapidly as they do. Coloration is typically stronger in near-shore species than in those living in the open ocean, whose functions tend to be restricted to disruptive camouflage. These chromatophores are found throughout the body of the octopus; however, they are controlled by the same part of the brain that controls elongation during jet propulsion to reduce drag. As such, jetting octopuses can turn pale because the brain cannot control elongation and the chromatophores at the same time. Most octopuses mimic select structures in their field of view rather than becoming a composite color of their full background. Evidence of original coloration has been detected in cephalopod fossils dating as far back as the Silurian; these orthoconic individuals bore concentric stripes, which are thought to have served as camouflage. Devonian cephalopods bear more complex color patterns, of unknown function. Chromatophores Coleoids, a shell-less subclass of cephalopods (squid, cuttlefish, and octopuses), have complex pigment-containing cells called chromatophores which are capable of producing rapidly changing color patterns. These cells store pigment within an elastic sac which produces the color seen from these cells. Coleoids can change the shape of this sac, called the cytoelastic sacculus, which then causes changes in the translucency and opacity of the cell. By rapidly changing multiple chromatophores of different colors, cephalopods are able to change the color of their skin at astonishing speeds, an adaptation that is especially notable in an organism that sees in black and white. Chromatophores are known to only contain three pigments (red, yellow, and brown), which cannot create the full color spectrum.
However, cephalopods also have cells called iridophores, thin, layered protein cells that reflect light in ways that can produce colors chromatophores cannot. The mechanism of iridophore control is unknown, but chromatophores are under the control of neural pathways, allowing the cephalopod to coordinate elaborate displays. Together, chromatophores and iridophores are able to produce a large range of colors and pattern displays. Adaptive value Cephalopods use the color-changing ability of chromatophores to camouflage themselves. Chromatophores allow coleoids to blend into many different environments, from coral reefs to the sandy sea floor. The color change of chromatophores works in concert with papillae, epithelial tissue which grows and deforms through hydrostatic motion to change skin texture. Chromatophores are able to perform two types of camouflage: mimicry and background matching. In mimicry, an organism changes its appearance to resemble a different organism. The squid Sepioteuthis sepioidea has been documented changing its appearance to resemble the non-threatening, herbivorous parrotfish in order to approach unaware prey. The octopus Thaumoctopus mimicus is known to mimic a number of different venomous organisms it cohabits with to deter predators. In background matching, a cephalopod changes its appearance to resemble its surroundings, hiding from predators or concealing itself from prey. The ability to both mimic other organisms and match the appearance of their surroundings is notable given that cephalopods' vision is monochromatic. Cephalopods also use their fine control of body coloration and patterning to perform complex signaling displays for both interspecific and intraspecific communication. Coloration is used in concert with locomotion and texture to send signals to other organisms. Interspecifically, this can serve as a warning display to potential predators. For example, when the octopus Callistoctopus macropus is threatened, it will turn a bright red-brown color speckled with white dots as a high-contrast display to startle predators. Intraspecifically, color change is used for both mating displays and social communication. Cuttlefish have intricate mating displays from males to females. There is also male-to-male signaling during competition over mates; all of these are the product of chromatophore coloration displays. Origin There are two hypotheses about the evolution of color change in cephalopods. One hypothesis is that the ability to change color may have evolved for social, sexual, and signaling functions. Another explanation is that it first evolved because of selective pressures encouraging predator avoidance and stealth hunting. For color change to have evolved as the result of social selection, the environment of cephalopods' ancestors would have to fit a number of criteria. First, there would need to be some kind of mating ritual that involved signaling. Second, they would have to experience demonstrably high levels of sexual selection. And third, the ancestor would need to communicate using sexual signals that are visible to a conspecific receiver. For color change to have evolved as the result of natural selection, different criteria would have to be met. First, there would need to be some phenotypic diversity in body patterning among the population. The species would also need to cohabit with predators that rely on vision for prey identification.
These predators should have a high range of visual sensitivity, detecting not just motion or contrast but also colors. The habitats they occupy would also need to display a diversity of backgrounds. Experiments done in dwarf chameleons testing these hypotheses showed that chameleon taxa with greater capacity for color change had more visually conspicuous social signals but did not come from more visually diverse habitats, suggesting that color change ability likely evolved to facilitate social signaling, while camouflage is a useful byproduct. Because camouflage is used for multiple adaptive purposes in cephalopods, color change could have evolved for one use and the other developed later, or it evolved to regulate trade offs within both. Convergent evolution Color change is widespread in ectotherms including anoles, frogs, mollusks, many fish, insects, and spiders. The mechanism behind this color change can be either morphological or physiological. Morphological change is the result of a change in the density of pigment containing cells and tends to change over longer periods of time. Physiological change, the kind observed in cephalopod lineages, is typically the result of the movement of pigment within the chromatophore, changing where different pigments are localized within the cell. This physiological change typically occurs on much shorter timescales compared to morphological change. Cephalopods have a rare form of physiological color change which utilizes neural control of muscles to change the morphology of their chromatophores. This neural control of chromatophores has evolved convergently in both cephalopods and teleosts fishes. Ink With the exception of the Nautilidae and the species of octopus belonging to the suborder Cirrina, all known cephalopods have an ink sac, which can be used to expel a cloud of dark ink to confuse predators. This sac is a muscular bag which originated as an extension of the hindgut. It lies beneath the gut and opens into the anus, into which its contents – almost pure melanin – can be squirted; its proximity to the base of the funnel means the ink can be distributed by ejected water as the cephalopod uses its jet propulsion. The ejected cloud of melanin is usually mixed, upon expulsion, with mucus, produced elsewhere in the mantle, and therefore forms a thick cloud, resulting in visual (and possibly chemosensory) impairment of the predator, like a smokescreen. However, a more sophisticated behavior has been observed, in which the cephalopod releases a cloud, with a greater mucus content, that approximately resembles the cephalopod that released it (this decoy is referred to as a pseudomorph). This strategy often results in the predator attacking the pseudomorph, rather than its rapidly departing prey. For more information, see Inking behaviors. The ink sac of cephalopods has led to a common name of "inkfish", formerly the pen-and-ink fish. Circulatory system Cephalopods are the only molluscs with a closed circulatory system. Coleoids have two gill hearts (also known as branchial hearts) that move blood through the capillaries of the gills. A single systemic heart then pumps the oxygenated blood through the rest of the body. Like most molluscs, cephalopods use hemocyanin, a copper-containing protein, rather than hemoglobin, to transport oxygen. As a result, their blood is colorless when deoxygenated and turns blue when bonded to oxygen. 
In oxygen-rich environments and in acidic water, hemoglobin is more efficient, but in environments with little oxygen and in low temperatures, hemocyanin has the upper hand. The hemocyanin molecule is much larger than the hemoglobin molecule, allowing it to bind far more oxygen molecules (96, compared with hemoglobin's four). But unlike hemoglobin, which is packed by the millions inside each red blood cell, hemocyanin molecules float freely in the bloodstream. Respiration Cephalopods exchange gases with the seawater by forcing water through their gills, which are attached to the roof of the organism. Water enters the mantle cavity on the outside of the gills, and the entrance of the mantle cavity closes. When the mantle contracts, water is forced through the gills, which lie between the mantle cavity and the funnel. The water's expulsion through the funnel can be used to power jet propulsion. If respiration is used concurrently with jet propulsion, large losses in speed or in oxygen uptake can be expected. The gills, which are much more efficient than those of other mollusks, are attached to the ventral surface of the mantle cavity. There is a trade-off with gill size regarding lifestyle. To achieve fast speeds, gills need to be small – water will be passed through them quickly when energy is needed, compensating for their small size. However, organisms which spend most of their time moving slowly along the bottom do not naturally pass much water through their cavity for locomotion; thus they have larger gills, along with complex systems to ensure that water is constantly washing through their gills, even when the organism is stationary. The water flow is controlled by contractions of the radial and circular mantle cavity muscles. The gills of cephalopods are supported by a skeleton of robust fibrous proteins; the lack of mucopolysaccharides distinguishes this matrix from cartilage. The gills are also thought to be involved in excretion, with NH4+ being swapped with K+ from the seawater. Locomotion and buoyancy While most cephalopods can move by jet propulsion, this is a very energy-consuming way to travel compared to the tail propulsion used by fish. The efficiency of a propeller-driven waterjet (i.e. its Froude efficiency) is greater than that of a rocket. The relative efficiency of jet propulsion decreases further as animal size increases; paralarvae are far more efficient than juvenile and adult individuals. Since the Paleozoic era, as competition with fish produced an environment where efficient motion was crucial to survival, jet propulsion has taken a secondary role, with fins and tentacles used to maintain a steady velocity. Whilst jet propulsion is never the sole mode of locomotion, the stop-start motion provided by the jets continues to be useful for providing bursts of high speed – not least when capturing prey or avoiding predators. Indeed, it makes cephalopods the fastest marine invertebrates, and they can out-accelerate most fish. The jet is supplemented with fin motion; in the squid, the fins flap each time that a jet is released, amplifying the thrust; they are then extended between jets (presumably to avoid sinking). Oxygenated water is taken into the mantle cavity to the gills, and through muscular contraction of this cavity the spent water is expelled through the hyponome, created by a fold in the mantle. The size difference between the posterior and anterior ends of this organ controls the speed of the jet the organism can produce.
The velocity of the organism can be accurately predicted for a given mass and morphology of animal. Motion of the cephalopods is usually backward as water is forced out anteriorly through the hyponome, but direction can be controlled somewhat by pointing it in different directions. Some cephalopods accompany this expulsion of water with a gunshot-like popping noise, thought to function to frighten away potential predators. Cephalopods employ a similar method of propulsion despite their increasing size (as they grow) changing the dynamics of the water in which they find themselves. Thus their paralarvae do not extensively use their fins (which are less efficient at low Reynolds numbers) and primarily use their jets to propel themselves upwards, whereas large adult cephalopods tend to swim less efficiently and with more reliance on their fins. Early cephalopods are thought to have produced jets by drawing their body into their shells, as Nautilus does today. Nautilus is also capable of creating a jet by undulations of its funnel; this slower flow of water is more suited to the extraction of oxygen from the water. When motionless, Nautilus can only extract 20% of oxygen from the water. The jet velocity in Nautilus is much slower than in coleoids, but less musculature and energy is involved in its production. Jet thrust in cephalopods is controlled primarily by the maximum diameter of the funnel orifice (or, perhaps, the average diameter of the funnel) and the diameter of the mantle cavity. Changes in the size of the orifice are used most at intermediate velocities. The absolute velocity achieved is limited by the cephalopod's requirement to inhale water for expulsion; this intake limits the maximum velocity to eight body-lengths per second, a speed which most cephalopods can attain after two funnel-blows. Water refills the cavity by entering not only through the orifices, but also through the funnel. Squid can expel up to 94% of the fluid within their cavity in a single jet thrust. To accommodate the rapid changes in water intake and expulsion, the orifices are highly flexible and can change their size by a factor of 20; the funnel radius, conversely, changes only by a factor of around 1.5. Some octopus species are also able to walk along the seabed. Squids and cuttlefish can move short distances in any direction by rippling of a flap of muscle around the mantle. While most cephalopods float (i.e. are neutrally buoyant or nearly so; in fact most cephalopods are about 2–3% denser than seawater), they achieve this in different ways. Some, such as Nautilus, allow gas to diffuse into the gap between the mantle and the shell; others allow purer water to ooze from their kidneys, forcing out denser salt water from the body cavity; others, like some fish, accumulate oils in the liver; and some octopuses have a gelatinous body with lighter chloride ions replacing sulfate in the body chemistry. Squids are the primary sufferers of negative buoyancy in cephalopods. The negative buoyancy means that some squids, especially those whose habitat depths are rather shallow, have to actively regulate their vertical positions. This means that they must expend energy, often through jetting or undulations, in order to maintain the same depth. As such, the cost of transport of many squids are quite high. That being said, squid and other cephalopod that dwell in deep waters tend to be more neutrally buoyant which removes the need to regulate depth and increases their locomotory efficiency. 
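The Froude (propulsive) efficiency mentioned above can be illustrated with the standard momentum-theory expression; the swimming and jet speeds used here are invented example values, not measurements of any particular squid or fish.

```python
def froude_efficiency(swim_speed: float, jet_speed: float) -> float:
    """Ideal propulsive (Froude) efficiency of a jet.

    Thrust comes from accelerating water from the swimming speed u up to the
    jet speed v_j; the kinetic energy left behind in the wake gives
        eta = 2 * u / (u + v_j).
    A fast, narrow jet therefore wastes more energy than a slow push on a
    large mass of water, as delivered by a fish's tail or a propeller.
    """
    return 2 * swim_speed / (swim_speed + jet_speed)

print(froude_efficiency(swim_speed=0.5, jet_speed=3.0))  # narrow, fast jet  ~0.29
print(froude_efficiency(swim_speed=0.5, jet_speed=0.8))  # broad, slow push  ~0.77
```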
The Macrotritopus defilippi, or the sand-dwelling octopus, was seen mimicking both the coloration and the swimming movements of the sand-dwelling flounder Bothus lunatus to avoid predators. The octopuses were able to flatten their bodies and put their arms back to appear the same as the flounders as well as move with the same speed and movements. Females of two species, Ocythoe tuberculata and Haliphron atlanticus, have evolved a true swim bladder. Octopus vs. squid locomotion Two of the categories of cephalopods, octopus and squid, are vastly different in their movements despite being of the same class. Octopuses are generally not seen as active swimmers; they are often found scavenging the sea floor instead of swimming long distances through the water. Squid, on the other hand, can be found to travel vast distances, with some moving as much as 2,000 km in 2.5 months at an average pace of 0.9 body lengths per second. There is a major reason for the difference in movement type and efficiency: anatomy. Both octopuses and squids have mantles (referenced above) which function towards respiration and locomotion in the form of jetting. The composition of these mantles differs between the two families, however. In octopuses, the mantle is made up of three muscle types: longitudinal, radial, and circular. The longitudinal muscles run parallel to the length of the octopus and they are used in order to keep the mantle the same length throughout the jetting process. Given that they are muscles, it can be noted that this means the octopus must actively flex the longitudinal muscles during jetting in order to keep the mantle at a constant length. The radial muscles run perpendicular to the longitudinal muscles and are used to thicken and thin the wall of the mantle. Finally, the circular muscles are used as the main activators in jetting. They are muscle bands that surround the mantle and expand/contract the cavity. All three muscle types work in unison to produce a jet as a propulsion mechanism. Squids do not have the longitudinal muscles that octopus do. Instead, they have a tunic. This tunic is made of layers of collagen and it surrounds the top and the bottom of the mantle. Because they are made of collagen and not muscle, the tunics are rigid bodies that are much stronger than the muscle counterparts. This provides the squids some advantages for jet propulsion swimming. The stiffness means that there is no necessary muscle flexing to keep the mantle the same size. In addition, tunics take up only 1% of the squid mantle's wall thickness, whereas the longitudinal muscle fibers take up to 20% of the mantle wall thickness in octopuses. Also because of the rigidity of the tunic, the radial muscles in squid can contract more forcefully. The mantle is not the only place where squids have collagen. Collagen fibers are located throughout the other muscle fibers in the mantle. These collagen fibers act as elastics and are sometimes named "collagen springs". As the name implies, these fibers act as springs. When the radial and circular muscles in the mantle contract, they reach a point where the contraction is no longer efficient to the forward motion of the creature. In such cases, the excess contraction is stored in the collagen which then efficiently begins or aids in the expansion of the mantle at the end of the jet. In some tests, the collagen has been shown to be able to begin raising mantle pressure up to 50ms before muscle activity is initiated. 
These anatomical differences between squid and octopuses can help explain why squid can be found swimming comparably to fish while octopuses usually rely on other forms of locomotion on the sea floor such as bipedal walking, crawling, and non-jetting swimming. Shell Nautiluses are the only extant cephalopods with a true external shell. However, all molluscan shells are formed from the ectoderm (outer layer of the embryo); in cuttlefish (Sepia spp.), for example, an invagination of the ectoderm forms during the embryonic period, resulting in a shell (cuttlebone) that is internal in the adult. The same is true of the chitinous gladius of squid and octopuses. Cirrate octopods have arch-shaped cartilaginous fin supports, which are sometimes referred to as a "shell vestige" or "gladius". The Incirrina have either a pair of rod-shaped stylets or no vestige of an internal shell, and some squid also lack a gladius. The shelled coleoids do not form a clade or even a paraphyletic group. The Spirula shell begins as an organic structure, and is then very rapidly mineralized. Shells that are "lost" may be lost by resorption of the calcium carbonate component. Females of the octopus genus Argonauta secrete a specialized paper-thin egg case in which they reside, and this is popularly regarded as a "shell", although it is not attached to the body of the animal and has a separate evolutionary origin. The largest group of shelled cephalopods, the ammonites, are extinct, but their shells are very common as fossils. The deposition of carbonate, leading to a mineralized shell, appears to be related to the acidity of the organic shell matrix (see Mollusc shell); shell-forming cephalopods have an acidic matrix, whereas the gladius of squid has a basic matrix. The basic arrangement of the cephalopod outer wall is: an outer (spherulitic) prismatic layer, a laminar (nacreous) layer and an inner prismatic layer. The thickness of every layer depends on the taxa. In modern cephalopods, the Ca carbonate is aragonite. As for other mollusc shells or coral skeletons, the smallest visible units are irregular rounded granules. Head appendages Cephalopods, as the name implies, have muscular appendages extending from their heads and surrounding their mouths. These are used in feeding, mobility, and even reproduction. In coleoids they number eight or ten. Decapods such as cuttlefish and squid have five pairs. The longer two, termed "tentacles", are actively involved in capturing prey; they can lengthen rapidly (in as little as 15 milliseconds). In giant squid, they may reach a length of 8 metres. They may terminate in a broadened, sucker-coated club. The shorter four pairs are termed arms, and are involved in holding and manipulating the captured organism. They too have suckers, on the side closest to the mouth; these help to hold onto the prey. Octopods only have four pairs of sucker-coated arms, as the name suggests, though developmental abnormalities can modify the number of arms expressed. The tentacle consists of a thick central nerve cord (which must be thick to allow each sucker to be controlled independently) surrounded by circular and radial muscles. Because the volume of the tentacle remains constant, contracting the circular muscles decreases the radius and permits the rapid increase in length. Typically, a 70% lengthening is achieved by decreasing the width by 23%. The shorter arms lack this capability. 
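The figure of a 70% lengthening for a 23% decrease in width follows from treating the tentacle as a constant-volume cylinder, a simplification that ignores the real muscular architecture:

```python
import math

# Constant-volume cylinder: V = pi * r^2 * l is fixed, so if the length grows
# by a factor k the radius must shrink by a factor 1 / sqrt(k).
k = 1.70                                  # 70% lengthening
radius_factor = 1 / math.sqrt(k)
print(f"width decrease: {(1 - radius_factor) * 100:.1f}%")   # ≈ 23%
```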
The size of the tentacle is related to the size of the buccal cavity; larger, stronger tentacles can hold prey as small bites are taken from it; with more numerous, smaller tentacles, prey is swallowed whole, so the mouth cavity must be larger. Externally shelled nautilids (Nautilus and Allonautilus) have on the order of 90 finger-like appendages, termed tentacles, which lack suckers but are sticky instead, and are partly retractable. Feeding All living cephalopods have a two-part beak; most have a radula, although it is reduced in most octopus and absent altogether in Spirula. They feed by capturing prey with their tentacles, drawing it into their mouth and taking bites from it. They have a mixture of toxic digestive juices, some of which are manufactured by symbiotic algae, which they eject from their salivary glands onto their captured prey held in their mouths. These juices separate the flesh of their prey from the bone or shell. The salivary gland has a small tooth at its end which can be poked into an organism to digest it from within. The digestive gland itself is rather short. It has four elements, with food passing through the crop, stomach and caecum before entering the intestine. Most digestion, as well as the absorption of nutrients, occurs in the digestive gland, sometimes called the liver. Nutrients and waste materials are exchanged between the gut and the digestive gland through a pair of connections linking the gland to the junction of the stomach and caecum. Cells in the digestive gland directly release pigmented excretory chemicals into the lumen of the gut, which are then bound with mucus passed through the anus as long dark strings, ejected with the aid of exhaled water from the funnel. Cephalopods tend to concentrate ingested heavy metals in their body tissue. However, octopus arms use a family of cephalopod-specific chemotactile receptors (CRs) to be their "taste by touch" system. Radula The cephalopod radula consists of multiple symmetrical rows of up to nine teeth – thirteen in fossil classes. The organ is reduced or even vestigial in certain octopus species and is absent in Spirula. The teeth may be homodont (i.e. similar in form across a row), heterodont (otherwise), or ctenodont (comb-like). Their height, width and number of cusps is variable between species. The pattern of teeth repeats, but each row may not be identical to the last; in the octopus, for instance, the sequence repeats every five rows. Cephalopod radulae are known from fossil deposits dating back to the Ordovician. They are usually preserved within the cephalopod's body chamber, commonly in conjunction with the mandibles; but this need not always be the case; many radulae are preserved in a range of settings in the Mason Creek. Radulae are usually difficult to detect, even when they are preserved in fossils, as the rock must weather and crack in exactly the right fashion to expose them; for instance, radulae have only been found in nine of the 43 ammonite genera, and they are rarer still in non-ammonoid forms: only three pre-Mesozoic species possess one. Excretory system Most cephalopods possess a single pair of large nephridia. Filtered nitrogenous waste is produced in the pericardial cavity of the branchial hearts, each of which is connected to a nephridium by a narrow canal. The canal delivers the excreta to a bladder-like renal sac, and also resorbs excess water from the filtrate. 
Several outgrowths of the lateral vena cava project into the renal sac, continuously inflating and deflating as the branchial hearts beat. This action helps to pump the secreted waste into the sacs, to be released into the mantle cavity through a pore. Nautilus, unusually, possesses four nephridia, none of which are connected to the pericardial cavities. The incorporation of ammonia is important for shell formation in terrestrial molluscs and other non-molluscan lineages. Because protein (i.e., flesh) is a major constituent of the cephalopod diet, large amounts of ammonium ions are produced as waste. The main organs involved with the release of this excess ammonium are the gills. The rate of release is lowest in the shelled cephalopods Nautilus and Sepia as a result of their using nitrogen to fill their shells with gas to increase buoyancy. Other cephalopods use ammonium in a similar way, storing the ions (as ammonium chloride) to reduce their overall density and increase buoyancy. Reproduction and life cycle Cephalopods are a diverse group of species, but share common life history traits, for example, they have a rapid growth rate and short life spans. Stearns (1992) suggested that in order to produce the largest possible number of viable offspring, spawning events depend on the ecological environmental factors of the organism. The majority of cephalopods do not provide parental care to their offspring, except, for example, octopus, which helps this organism increase the survival rate of their offspring. Marine species' life cycles are affected by various environmental conditions. The development of a cephalopod embryo can be greatly affected by temperature, oxygen saturation, pollution, light intensity, and salinity. These factors are important to the rate of embryonic development and the success of hatching of the embryos. Food availability also plays an important role in the reproductive cycle of cephalopods. A limitation of food influences the timing of spawning along with their function and growth. Spawning time and spawning vary among marine species; it's correlated with temperature, though cephalopods in shallow water spawn in cold months so that the offspring would hatch at warmer temperatures. Breeding can last from several days to a month. Sexual maturity Cephalopods that are sexually mature and of adult size begin spawning and reproducing. After the transfer of genetic material to the following generation, the adult cephalopods in most species then die. Sexual maturation in male and female cephalopods can be observed internally by the enlargement of gonads and accessory glands. Mating would be a poor indicator of sexual maturation in females; they can receive sperm when not fully reproductively mature and store them until they are ready to fertilize the eggs. Males are more aggressive in their pre-mating competition when in the presence of immature females than when competing for a sexually mature female. Most cephalopod males develop a hectocotylus, an arm tip which is capable of transferring their spermatozoa into the female mantle cavity. Though not all species use a hectocotylus; for example, the adult nautilus releases a spadix. Some male squids, mainly deep-water species, have instead evolved a penis longer than their own body length, the longest penis in any free-living animals. It is assumed these males simply attach a spermatophore anywhere on a female's body. An indication of sexual maturity of females is the development of brachial photophores to attract mates. 
Fertilization Cephalopods are not broadcast spawners. During the process of fertilization, the females use sperm provided by the male via external fertilization; internal fertilization is seen only in octopuses. Copulation begins when the male catches a female and wraps his arm around her, either in a "male to female neck" position or a mouth-to-mouth position, depending on the species. The males then initiate the process of fertilization by contracting their mantle several times to release the spermatozoa. Cephalopods often mate several times, and males mate for longer with females that have mated previously, nearly tripling the number of contractions of the mantle. To ensure the fertilization of the eggs, female cephalopods release a sperm-attracting peptide through the gelatinous layers of the egg to direct the spermatozoa. Female cephalopods lay eggs in clutches; each egg has a protective coat to ensure the safety of the developing embryo when it is released into the water column. Reproductive strategies differ between cephalopod species. In the giant Pacific octopus, large eggs are laid in a den; it will often take several days to lay all of them. Once the eggs are released and normally attached to a sheltered substrate, the female usually dies shortly after, though octopuses and a few squids look after their eggs in the meantime. Others, like the Japanese flying squid, will spawn neutrally buoyant egg masses which will float at the interface between water layers of slightly different densities, or the female will swim around while carrying the eggs with her. Most species are semelparous (reproducing only once before dying); the only known exceptions are the vampire squid, the lesser Pacific striped octopus and the nautilus, which are iteroparous. In some species of cephalopods, egg clutches are anchored to substrates by a mucilaginous adhesive substance. These eggs are swollen with perivitelline fluid (PVF), a hypertonic fluid that prevents premature hatching. Fertilized egg clusters are neutrally buoyant depending on the depth that they were laid, but can also be found in substrates such as sand, a matrix of corals, or seaweed. Because these species do not provide parental care for their offspring, egg capsules can be injected with ink by the female in order to camouflage the embryos from predators. Male–male competition Most cephalopods engage in aggressive sex: a protein in the male capsule sheath stimulates this behavior. They also engage in male–male aggression, where larger males tend to win the interactions. When a female is near, the males charge one another continuously and flail their arms. If neither male backs away, the arms extend to the back, exposing the mouth, followed by the biting of arm tips. During mate competition males also participate in a technique called flushing. This technique is used by the second male attempting to mate with a female. Flushing removes spermatophores placed in the buccal cavity by the first mate, by forcing water into the cavity. Another behavior that males engage in is sneaker mating or mimicry – smaller males adjust their behavior to that of a female in order to reduce aggression. By using this technique, they are able to fertilize the eggs while the larger male is distracted by a different male. During this process, the sneaker males quickly insert drop-like sperm into the seminal receptacle. 
Mate choice Mate choice is seen in cuttlefish species, where females prefer some males over others, though the characteristics of the preferred males are unknown. One hypothesis is that females reject males by olfactory cues rather than visual cues. Several cephalopod species are polyandrous – accepting and storing multiple male spermatophores, which has been identified by DNA fingerprinting. Females are no longer receptive to mating attempts when holding their eggs in their arms. Females can store sperm in two places: (1) the buccal cavity, where recently mated males place their spermatophores, and (2) the internal sperm-storage receptacles, where sperm packages from previous males are stored. Spermatophore storage results in sperm competition, in which the female controls which mate fertilizes the eggs. In order to reduce this sort of competition, males develop agonistic behaviors like mate guarding and flushing. Hapalochlaena lunulata, the blue-ringed octopus, readily mates with both males and females. Sexual dimorphism In a variety of marine organisms, females are larger than males in some closely related species. In some lineages, such as the blanket octopus, males become progressively smaller, a phenomenon termed dwarfism; dwarf males usually occur at low densities. The blanket octopus male is an example of sexual-evolutionary dwarfism; females grow 10,000 to 40,000 times larger than the males, and the sexes can be distinguished soon after the eggs hatch. Embryology Cephalopod eggs span a large range of sizes, from 1 to 30 mm in diameter. The fertilised ovum initially divides to produce a disc of germinal cells at one pole, with the yolk remaining at the opposite pole. The germinal disc grows to envelop and eventually absorb the yolk, forming the embryo. The tentacles and arms first appear at the hind part of the body, where the foot would be in other molluscs, and only later migrate towards the head. The funnel of cephalopods develops on the top of their head, whereas the mouth develops on the opposite surface. The early embryological stages are reminiscent of ancestral gastropods and extant Monoplacophora. The shells develop from the ectoderm as an organic framework which is subsequently mineralized. In Sepia, which has an internal shell, the ectoderm forms an invagination whose pore is sealed off before this organic framework is deposited. Development The length of time before hatching is highly variable; smaller eggs in warmer waters are the fastest to hatch, and newborns can emerge after as little as a few days. Larger eggs in colder waters can develop for over a year before hatching. The process from spawning to hatching follows a similar trajectory in all species, the main variable being the amount of yolk available to the young and when it is absorbed by the embryo. Unlike most other molluscs, cephalopods do not have a morphologically distinct larval stage. Instead, the juveniles are known as paralarvae. They quickly learn how to hunt, using encounters with prey to refine their strategies. Growth in juveniles is usually allometric, whilst adult growth is isometric. Evolution The traditional view of cephalopod evolution holds that they evolved in the Late Cambrian from a monoplacophoran-like ancestor with a curved, tapering shell, which was closely related to the gastropods (snails). 
The similarity of the early shelled cephalopod Plectronoceras to some gastropods was used in support of this view. The development of a siphuncle would have allowed the shells of these early forms to become gas-filled (thus buoyant) in order to support them and keep the shells upright while the animal crawled along the floor, and separated the true cephalopods from putative ancestors such as Knightoconus, which lacked a siphuncle. Neutral or positive buoyancy (i.e. the ability to float) would have come later, followed by swimming in the Plectronocerida and eventually jet propulsion in more derived cephalopods. Possible early Cambrian remains have been found in the Avalon Peninsula, matching genetic data for a pre-Cambrian origin; however, this specimen was later shown to be a chimeric fossil. In 2010, some researchers proposed that Nectocaris pteryx was an early cephalopod; it did not have a shell and appeared to possess jet propulsion in the manner of "derived" cephalopods, which complicated the question of the order in which cephalopod features developed. However, most other researchers do not agree that Nectocaris was a cephalopod, or even a mollusc. Early cephalopods were likely predators near the top of the food chain. After the late Cambrian extinction led to the disappearance of many radiodonts, predatory niches became available for other animals. During the Ordovician period, the primitive cephalopods underwent pulses of diversification to become diverse and dominant in the Paleozoic and Mesozoic seas. In the Early Palaeozoic, their range was far more restricted than today; they were mainly constrained to sublittoral regions of shallow shelves of the low latitudes, and usually occurred in association with thrombolites. A more pelagic habit was gradually adopted as the Ordovician progressed. Deep-water cephalopods, whilst rare, have been found in the Lower Ordovician – but only in high-latitude waters. The mid-Ordovician saw the first cephalopods with septa strong enough to cope with the pressures associated with deeper water, enabling them to inhabit depths greater than 100–200 m. The direction of shell coiling would prove to be crucial to the future success of the lineages; endogastric coiling would only permit large size to be attained with a straight shell, whereas exogastric coiling – initially rather rare – permitted the spirals familiar from the fossil record to develop, with their corresponding large size and diversity. (Endogastric means the shell is curved so that the ventral or lower side is longitudinally concave (abdomen in); exogastric means the shell is curved so that the ventral side is longitudinally convex (abdomen out), allowing the funnel to be pointed backward beneath the shell.) The ancestors of coleoids (including most modern cephalopods) and the ancestors of the modern nautilus had diverged by the Floian Age of the Early Ordovician Period, over 470 million years ago. The Bactritida, a Devonian–Triassic group of orthocones, are widely held to be paraphyletic without the coleoids and ammonoids, that is, the latter groups arose from within the Bactritida. An increase in the diversity of the coleoids and ammonoids is observed around the start of the Devonian period and corresponds with a profound increase in fish diversity. This could represent the origin of the two derived groups. Unlike most modern cephalopods, most ancient varieties had protective shells. 
These shells at first were conical but later developed into the curved nautiloid shapes seen in modern nautilus species. Competitive pressure from fish is thought to have forced the shelled forms into deeper water, which provided an evolutionary pressure towards shell loss and gave rise to the modern coleoids, a change which led to greater metabolic costs associated with the loss of buoyancy, but which allowed them to recolonize shallow waters. However, some of the straight-shelled nautiloids evolved into belemnites. The loss of the shell may also have resulted from evolutionary pressure to increase maneuverability, resulting in a more fish-like habit. There has been debate on the embryological origin of cephalopod appendages. Until the mid-20th century, the "Arms as Head" hypothesis was widely recognized. In this theory, the arms and tentacles of cephalopods look similar to the head appendages of gastropods, suggesting that they might be homologous structures. Cephalopod appendages surround the mouth, so logically they could be derived from embryonic head tissues. However, the "Arms as Foot" hypothesis, proposed by Adolf Naef in 1928, has increasingly been favoured; for example, fate mapping of limb buds in the chambered nautilus indicates that limb buds originate from "foot" embryonic tissues. Genetics The sequencing of a full cephalopod genome has remained challenging to researchers due to the length and repetition of their DNA. The characteristics of cephalopod genomes were initially hypothesized to be the result of entire genome duplications. Following the full sequencing of a California two-spot octopus, the genome showed similar patterns to other marine invertebrates, with significant additions to the genome assumed to be unique to cephalopods. No evidence of full genome duplication was found. Within the California two-spot octopus genome there are substantial replications of two gene families. Significantly, the expanded gene families were previously known to exhibit replicative behaviour only within vertebrates. The first gene family was identified as the protocadherins, which are attributed to neuron development. Protocadherins function as cell adhesion molecules, essential for synaptic specificity. The mechanism for protocadherin gene family replication in vertebrates is attributed to complex splicing, or cutting and pasting, from a locus. Following the sequencing of the California two-spot octopus, researchers found that the protocadherin gene family in cephalopods has expanded in the genome due to tandem gene duplication. The different replication mechanisms for protocadherin genes indicate an independent evolution of protocadherin gene expansion in vertebrates and invertebrates. Analysis of individual cephalopod protocadherin genes indicates independent evolution between species of cephalopod. The expanded protocadherin gene families of the shore squid Doryteuthis pealeii differ significantly from those of the California two-spot octopus, suggesting that gene expansion did not occur before speciation within cephalopods. Despite different mechanisms for gene expansion, the two-spot octopus protocadherin genes were more similar to those of vertebrates than to those of squid, suggesting a convergent evolution mechanism. The second gene family, the C2H2 zinc-finger genes, encodes small proteins that function as zinc transcription factors; C2H2 proteins are understood to moderate DNA, RNA and protein functions within the cell. 
The sequenced California two-spot octopus genome also showed a significant presence of transposable elements as well as transposon expression. Although the role of transposable elements in marine vertebrates is still relatively unknown, significant expression of transposons in nervous system tissues has been observed. In a study of the fruit fly Drosophila melanogaster, the expression of transposons during development activated genomic diversity between neurons. This diversity has been linked to increased memory and learning in mammals. The connection between transposons and increased neuron capability may provide insight into the observed intelligence, memory and function of cephalopods. Using long-read sequencing, researchers have decoded cephalopod genomes and discovered that they have been extensively churned and scrambled. When the genes were compared to those of thousands of other species, blocks of three or more genes that co-occurred in squid and octopus were not found together in any other animal. Many of the groupings involved genes expressed in nervous tissue, suggesting a possible basis for the evolution of cephalopod intelligence. Phylogeny The approximate consensus of extant cephalopod phylogeny, after Whalen & Landman (2022), is shown in the cladogram. Mineralized taxa are in bold. The internal phylogeny of the cephalopods is difficult to constrain; many molecular techniques have been adopted, but the results produced are conflicting. Nautilus tends to be considered an outgroup, with Vampyroteuthis forming an outgroup to other squid; however, in one analysis the nautiloids, octopus and teuthids plot as a polytomy. Some molecular phylogenies do not recover the mineralized coleoids (Spirula, Sepia, and Metasepia) as a clade; however, others do recover this more parsimonious-seeming clade, with Spirula as a sister group to Sepia and Metasepia in a clade that had probably diverged before the end of the Triassic. Molecular estimates for clade divergence vary. One 'statistically robust' estimate has Nautilus diverging from Octopus at . Taxonomy The classification presented here follows largely, for recent cephalopods, from the Current Classification of Recent Cephalopoda (May 2001), and for fossil cephalopods from Arkell et al. 1957, Teichert and Moore 1964, Teichert 1988, and others. The three subclasses are traditional, corresponding to the three orders of cephalopods recognized by Bather. Class Cephalopoda († indicates extinct groups) Subclass Nautiloidea: Fundamental ectocochliate cephalopods that provided the source for the Ammonoidea and Coleoidea. 
Order † Plectronocerida: the ancestral cephalopods from the Cambrian Period Order † Ellesmerocerida () Order † Endocerida () Order † Actinocerida () Order † Discosorida () Order † Pseudorthocerida () Order † Tarphycerida () Order † Oncocerida () Order Nautilida (extant; 410.5 Ma to present) Order † Orthocerida () Order † Ascocerida () Order † Bactritida () Subclass † Ammonoidea: ammonites () Order † Goniatitida () Order † Ceratitida () Order † Ammonitida () Subclass Coleoidea (410.0 Ma-Rec) Cohort † Belemnoidea: Belemnites and kin Genus † Jeletzkya Order † Aulacocerida () Order † Phragmoteuthida () Order † Hematitida () Order † Belemnitida () Genus † Belemnoteuthis () Cohort Neocoleoidea Superorder Decapodiformes (also known as Decabrachia or Decembranchiata) Order Spirulida: ram's horn squid Order Sepiida: cuttlefish Order Sepiolida: pygmy, bobtail and bottletail squid Order Idiosepida Order Oegopsida: neritic squid Order Myopsida: coastal squid Order Bathyteuthida Superorder Octopodiformes (also known as Vampyropoda) Family † Trachyteuthididae Order Vampyromorphida: vampire squid Order Octopoda: octopus Superorder † Palaeoteuthomorpha Order † Boletzkyida Other classifications differ, primarily in how the various decapod orders are related, and whether they should be orders or families. Suprafamilial classification of the Treatise This is the older classification that combines those found in parts K and L of the Treatise on Invertebrate Paleontology, which forms the basis for and is retained in large part by classifications that have come later. Nautiloids in general (Teichert and Moore, 1964) sequence as given. Subclass † Endoceratoidea. Not used by Flower, e.g. Flower and Kummel 1950, interjocerids included in the Endocerida. Order † Endocerida Order † Intejocerida Subclass † Actinoceratoidea Not used by Flower, ibid Order † Actinocerida Subclass Nautiloidea Nautiloidea in the restricted sense. Order † Ellesmerocerida Plectronocerida subsequently split off as separate order. Order † Orthocerida Includes orthocerids and pseudorthocerids Order † Ascocerida Order † Oncocerida Order † Discosorida Order † Tarphycerida Order † Barrandeocerida A polyphyletic group now included in the Tarphycerida Order Nautilida Subclass † Bactritoidea Order † Bactritida Paleozoic Ammonoidea (Miller, Furnish and Schindewolf, 1957) Suborder † Anarcestina Suborder † Clymeniina Suborder † Goniatitina Suborder † Prolecanitina Mesozoic Ammonoidea (Arkel et al., 1957) Suborder † Ceratitina Suborder † Phylloceratina Suborder † Lytoceratina Suborder † Ammonitina Subsequent revisions include the establishment of three Upper Cambrian orders, the Plectronocerida, Protactinocerida, and Yanhecerida; separation of the pseudorthocerids as the Pseudorthocerida, and elevating orthoceratid as the Subclass Orthoceratoidea. Shevyrev classification Shevyrev (2005) suggested a division into eight subclasses, mostly comprising the more diverse and numerous fossil forms, although this classification has been criticized as arbitrary, lacking evidence, and based on misinterpretations of other papers. 
Class Cephalopoda Subclass † Ellesmeroceratoidea Order † Plectronocerida () Order † Protactinocerida Order † Yanhecerida Order † Ellesmerocerida () Subclass † Endoceratoidea () Order † Endocerida () Order † Intejocerida () Subclass † Actinoceratoidea Order † Actinocerida () Subclass Nautiloidea (490.0 Ma- Rec) Order † Basslerocerida () Order † Tarphycerida () Order † Lituitida () Order † Discosorida () Order † Oncocerida () Order Nautilida (410.5 Ma-Rec) Subclass † Orthoceratoidea () Order † Orthocerida () Order † Ascocerida () Order † Dissidocerida () Order † Bajkalocerida Subclass † Bactritoidea () Subclass † Ammonoidea () Subclass Coleoidea (410.0 Ma-rec) Cladistic classification Another recent system divides all cephalopods into two clades. One includes nautilus and most fossil nautiloids. The other clade (Neocephalopoda or Angusteradulata) is closer to modern coleoids, and includes belemnoids, ammonoids, and many orthocerid families. There are also stem group cephalopods of the traditional Ellesmerocerida that belong to neither clade. The coleoids, despite some doubts, appear from molecular data to be monophyletic. In culture Ancient seafaring people were aware of cephalopods, as evidenced by such artworks as a stone carving found in the archaeological recovery from Bronze Age Minoan Crete at Knossos (1900 – 1100 BC), which has a depiction of a fisherman carrying an octopus. The terrifyingly powerful Gorgon of Greek mythology may have been inspired by the octopus or squid, the octopus's body representing the severed head of Medusa, the beak as the protruding tongue and fangs, and its tentacles as the snakes. The kraken is a legendary sea monster of giant proportions said to dwell off the coasts of Norway and Greenland, usually portrayed in art as a giant cephalopod attacking ships. Linnaeus included it in the first edition of his 1735 Systema Naturae. In a Hawaiian creation myth that says the present cosmos is the last of a series which arose in stages from the ruins of the previous universe, the octopus is the lone survivor of the previous, alien universe. The Akkorokamui is a gigantic tentacled monster from Ainu folklore. A battle with an octopus plays a significant role in Victor Hugo's book Travailleurs de la mer (Toilers of the Sea), relating to his time in exile on Guernsey. Ian Fleming's 1966 short story collection Octopussy and The Living Daylights, and the 1983 James Bond film were partly inspired by Hugo's book. Japanese erotic art, shunga, includes ukiyo-e woodblock prints such as Katsushika Hokusai's 1814 print Tako to ama (The Dream of the Fisherman's Wife), in which an ama diver is sexually intertwined with a large and a small octopus. The print is a forerunner of tentacle erotica. Its many arms that emanate from a common center means that the octopus is sometimes used to symbolize a powerful and manipulative organization.
Speech synthesis
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech. The reverse process is speech recognition. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units; a system that stores phones or diphones provides the largest output range, but may lack clarity. For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output. The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood clearly. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written words on a home computer. Many computer operating systems have included speech synthesizers since the early 1990s. A text-to-speech system (or "engine") is composed of two parts: a front-end and a back-end. The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent of written-out words. This process is often called text normalization, pre-processing, or tokenization. The front-end then assigns phonetic transcriptions to each word, and divides and marks the text into prosodic units, like phrases, clauses, and sentences. The process of assigning phonetic transcriptions to words is called text-to-phoneme or grapheme-to-phoneme conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that is output by the front-end. The back-end—often referred to as the synthesizer—then converts the symbolic linguistic representation into sound. In certain systems, this part includes the computation of the target prosody (pitch contour, phoneme durations), which is then imposed on the output speech. History Long before the invention of electronic signal processing, some people tried to build machines to emulate human speech. There were also legends of the existence of "Brazen Heads", such as those involving Pope Silvester II (d. 1003 AD), Albertus Magnus (1198–1280), and Roger Bacon (1214–1294). In 1779, the German-Danish scientist Christian Gottlieb Kratzenstein won the first prize in a competition announced by the Russian Imperial Academy of Sciences and Arts for models he built of the human vocal tract that could produce the five long vowel sounds (in International Phonetic Alphabet notation: , , , and ). There followed the bellows-operated "acoustic-mechanical speech machine" of Wolfgang von Kempelen of Pressburg, Hungary, described in a 1791 paper. This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. In 1837, Charles Wheatstone produced a "speaking machine" based on von Kempelen's design, and in 1846, Joseph Faber exhibited the "Euphonia". In 1923, Paget resurrected Wheatstone's design. In the 1930s, Bell Labs developed the vocoder, which automatically analyzed speech into its fundamental tones and resonances. 
From his work on the vocoder, Homer Dudley developed a keyboard-operated voice-synthesizer called The Voder (Voice Demonstrator), which he exhibited at the 1939 New York World's Fair. Dr. Franklin S. Cooper and his colleagues at Haskins Laboratories built the Pattern playback in the late 1940s and completed it in 1950. There were several different versions of this hardware device; only one currently survives. The machine converts pictures of the acoustic patterns of speech in the form of a spectrogram back into sound. Using this device, Alvin Liberman and colleagues discovered acoustic cues for the perception of phonetic segments (consonants and vowels). Electronic devices The first computer-based speech-synthesis systems originated in the late 1950s. Noriko Umeda et al. developed the first general English text-to-speech system in 1968, at the Electrotechnical Laboratory in Japan. In 1961, physicist John Larry Kelly, Jr and his colleague Louis Gerstman used an IBM 704 computer to synthesize speech, an event among the most prominent in the history of Bell Labs. Kelly's voice recorder synthesizer (vocoder) recreated the song "Daisy Bell", with musical accompaniment from Max Mathews. Coincidentally, Arthur C. Clarke was visiting his friend and colleague John Pierce at the Bell Labs Murray Hill facility. Clarke was so impressed by the demonstration that he used it in the climactic scene of his screenplay for his novel 2001: A Space Odyssey, where the HAL 9000 computer sings the same song as astronaut Dave Bowman puts it to sleep. Despite the success of purely electronic speech synthesis, research into mechanical speech-synthesizers continues. Linear predictive coding (LPC), a form of speech coding, began development with the work of Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT) in 1966. Further developments in LPC technology were made by Bishnu S. Atal and Manfred R. Schroeder at Bell Labs during the 1970s. LPC was later the basis for early speech synthesizer chips, such as the Texas Instruments LPC Speech Chips used in the Speak & Spell toys from 1978. In 1975, Fumitada Itakura developed the line spectral pairs (LSP) method for high-compression speech coding, while at NTT. From 1975 to 1981, Itakura studied problems in speech analysis and synthesis based on the LSP method. In 1980, his team developed an LSP-based speech synthesizer chip. LSP is an important technology for speech synthesis and coding, and in the 1990s was adopted by almost all international speech coding standards as an essential component, contributing to the enhancement of digital speech communication over mobile channels and the internet. In 1975, MUSA was released, and was one of the first Speech Synthesis systems. It consisted of a stand-alone computer hardware and a specialized software that enabled it to read Italian. A second version, released in 1978, was also able to sing Italian in an "a cappella" style. Dominant systems in the 1980s and 1990s were the DECtalk system, based largely on the work of Dennis Klatt at MIT, and the Bell Labs system; the latter was one of the first multilingual language-independent systems, making extensive use of natural language processing methods. Handheld electronics featuring speech synthesis began emerging in the 1970s. One of the first was the Telesensory Systems Inc. (TSI) Speech+ portable calculator for the blind in 1976. 
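The linear predictive coding mentioned above models each short frame of speech as the output of an all-pole filter driven by a simple excitation, which is what made it compact enough for early synthesizer chips. A minimal sketch of the analysis step (the autocorrelation method with the Levinson-Durbin recursion) follows; the frame length, filter order and the synthetic test frame are illustrative choices, not taken from any particular chip or codec.

```python
import numpy as np

def lpc(frame, order=10):
    """Estimate LPC coefficients for one speech frame using the
    autocorrelation method and the Levinson-Durbin recursion.
    Returns (a, e): coefficients [1, a1, ..., a_order] and residual energy."""
    x = frame * np.hamming(len(frame))                 # taper the frame
    r = np.array([np.dot(x[:len(x) - k], x[k:])        # autocorrelation, lags 0..order
                  for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for i in range(1, order + 1):
        # reflection coefficient from the current prediction error
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / e
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a, e = new_a, e * (1.0 - k * k)
    return a, e

# Toy usage: analyse a synthetic 25 ms "vowel-like" frame at 8 kHz.
fs = 8000
t = np.arange(int(0.025 * fs)) / fs
frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 720 * t)
coeffs, err = lpc(frame, order=10)
print("LPC coefficients:", np.round(coeffs, 3))
```

The resulting coefficients describe the vocal-tract filter for that frame; resynthesis then amounts to driving the inverse filter with a pulse train or noise, which is essentially what the early LPC speech chips did.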
Other devices had primarily educational purposes, such as the Speak & Spell toy produced by Texas Instruments in 1978. Fidelity released a speaking version of its electronic chess computer in 1979. The first video game to feature speech synthesis was the 1980 shoot 'em up arcade game, Stratovox (known in Japan as Speak & Rescue), from Sun Electronics. The first personal computer game with speech synthesis was Manbiki Shoujo (Shoplifting Girl), released in 1980 for the PET 2001, for which the game's developer, Hiroshi Suzuki, developed a "zero cross" programming technique to produce a synthesized speech waveform. Another early example, the arcade version of Berzerk, also dates from 1980. The Milton Bradley Company produced the first multi-player electronic game using voice synthesis, Milton, in the same year. In 1976, Computalker Consultants released their CT-1 Speech Synthesizer. Designed by D. Lloyd Rice and Jim Cooper, it was an analog synthesizer built to work with microcomputers using the S-100 bus standard. Early electronic speech-synthesizers sounded robotic and were often barely intelligible. The quality of synthesized speech has steadily improved, but output from contemporary speech synthesis systems remains clearly distinguishable from actual human speech. Synthesized voices typically sounded male until 1990, when Ann Syrdal, at AT&T Bell Laboratories, created a female voice. Kurzweil predicted in 2005 that as the cost-performance ratio caused speech synthesizers to become cheaper and more accessible, more people would benefit from the use of text-to-speech programs. Synthesizer technologies The most important qualities of a speech synthesis system are naturalness and intelligibility. Naturalness describes how closely the output sounds like human speech, while intelligibility is the ease with which the output is understood. The ideal speech synthesizer is both natural and intelligible. Speech synthesis systems usually try to maximize both characteristics. The two primary technologies generating synthetic speech waveforms are concatenative synthesis and formant synthesis. Each technology has strengths and weaknesses, and the intended uses of a synthesis system will typically determine which approach is used. Concatenation synthesis Concatenative synthesis is based on the concatenation (stringing together) of segments of recorded speech. Generally, concatenative synthesis produces the most natural-sounding synthesized speech. However, differences between natural variations in speech and the nature of the automated techniques for segmenting the waveforms sometimes result in audible glitches in the output. There are three main sub-types of concatenative synthesis. Unit selection synthesis Unit selection synthesis uses large databases of recorded speech. During database creation, each recorded utterance is segmented into some or all of the following: individual phones, diphones, half-phones, syllables, morphemes, words, phrases, and sentences. Typically, the division into segments is done using a specially modified speech recognizer set to a "forced alignment" mode with some manual correction afterward, using visual representations such as the waveform and spectrogram. An index of the units in the speech database is then created based on the segmentation and acoustic parameters like the fundamental frequency (pitch), duration, position in the syllable, and neighboring phones. 
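As a rough, self-contained illustration of this kind of unit index, the sketch below stores candidate units keyed by phone label and then picks the chain that minimises a weighted sum of a target cost (how far a candidate's pitch and duration are from what was requested) and a join cost (the pitch discontinuity at each concatenation point); this anticipates the run-time selection step described in the next paragraph. The Unit fields, the cost weights and the numbers are illustrative, not taken from any particular system.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Unit:
    phone: str        # phone label, e.g. "AH"
    pitch: float      # mean F0 in Hz
    duration: float   # duration in seconds
    clip_id: int      # where to find the audio in the recorded database

# Index of candidate units keyed by phone label (normally built offline).
index = defaultdict(list)
for u in [Unit("HH", 110, 0.06, 1), Unit("HH", 128, 0.05, 2),
          Unit("AH", 115, 0.09, 3), Unit("AH", 140, 0.08, 4),
          Unit("L",  118, 0.07, 5), Unit("L",  132, 0.07, 6)]:
    index[u.phone].append(u)

def target_cost(unit, want_pitch, want_dur):
    return abs(unit.pitch - want_pitch) / 50.0 + abs(unit.duration - want_dur) / 0.05

def join_cost(prev, unit):
    return abs(prev.pitch - unit.pitch) / 50.0 if prev else 0.0

def select_units(targets):
    """targets: list of (phone, desired_pitch, desired_duration) from the front-end.
    Viterbi-style search for the chain of candidates with the lowest total cost."""
    paths = [(0.0, [])]                      # (cumulative cost, chain so far)
    for phone, f0, dur in targets:
        new_paths = []
        for cand in index[phone]:
            best_cost, best_chain = None, None
            for cost, chain in paths:
                prev = chain[-1] if chain else None
                total = cost + target_cost(cand, f0, dur) + join_cost(prev, cand)
                if best_cost is None or total < best_cost:
                    best_cost, best_chain = total, chain
            new_paths.append((best_cost, best_chain + [cand]))
        paths = new_paths
    return min(paths, key=lambda p: p[0])

cost, chain = select_units([("HH", 120, 0.05), ("AH", 120, 0.09), ("L", 120, 0.07)])
print(cost, [u.clip_id for u in chain])
```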
At run time, the desired target utterance is created by determining the best chain of candidate units from the database (unit selection). This process is typically achieved using a specially weighted decision tree. Unit selection provides the greatest naturalness, because it applies only a small amount of digital signal processing (DSP) to the recorded speech. DSP often makes recorded speech sound less natural, although some systems use a small amount of signal processing at the point of concatenation to smooth the waveform. The output from the best unit-selection systems is often indistinguishable from real human voices, especially in contexts for which the TTS system has been tuned. However, maximum naturalness typically requires unit-selection speech databases to be very large, in some systems ranging into the gigabytes of recorded data, representing dozens of hours of speech. Also, unit selection algorithms have been known to select segments that result in less than ideal synthesis (e.g. minor words becoming unclear) even when a better choice exists in the database. Recently, researchers have proposed various automated methods to detect unnatural segments in unit-selection speech synthesis systems. Diphone synthesis Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions) occurring in a language. The number of diphones depends on the phonotactics of the language: for example, Spanish has about 800 diphones, and German about 2500. In diphone synthesis, only one example of each diphone is contained in the speech database. At runtime, the target prosody of a sentence is superimposed on these minimal units by means of digital signal processing techniques such as linear predictive coding, PSOLA or MBROLA, or more recent techniques such as pitch modification in the source domain using the discrete cosine transform. Diphone synthesis suffers from the sonic glitches of concatenative synthesis and the robotic-sounding nature of formant synthesis, and has few of the advantages of either approach other than small size. As such, its use in commercial applications is declining, although it continues to be used in research because there are a number of freely available software implementations. An early example of diphone synthesis is a teaching robot, Leachim, that was invented by Michael J. Freeman. Leachim contained information regarding class curricula and certain biographical information about the students whom it was programmed to teach. It was tested in a fourth-grade classroom in the Bronx, New York. Domain-specific synthesis Domain-specific synthesis concatenates prerecorded words and phrases to create complete utterances. It is used in applications where the variety of texts the system will output is limited to a particular domain, like transit schedule announcements or weather reports. The technology is very simple to implement, and has been in commercial use for a long time, in devices like talking clocks and calculators. The level of naturalness of these systems can be very high because the variety of sentence types is limited, and they closely match the prosody and intonation of the original recordings. Because these systems are limited by the words and phrases in their databases, they are not general-purpose and can only synthesize the combinations of words and phrases with which they have been preprogrammed. 
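A toy version of this domain-specific approach is a talking clock that simply splices together prerecorded prompts. This is only a sketch: the word-to-file mapping and the WAV file names are hypothetical, and it assumes every recording shares the same sample rate and sample format.

```python
import wave

# Hypothetical prerecorded prompts, one WAV file per word or phrase.
PROMPTS = {
    "the time is": "prompts/the_time_is.wav",
    "ten": "prompts/ten.wav",
    "thirty": "prompts/thirty.wav",
    "five": "prompts/five.wav",
    "a.m.": "prompts/am.wav",
    "p.m.": "prompts/pm.wav",
}

def words_for_time(hour, minute):
    """Very small domain: only a few hour and minute values are covered."""
    suffix = "a.m." if hour < 12 else "p.m."
    hour12 = hour % 12 or 12
    names = {5: "five", 10: "ten", 30: "thirty"}
    return ["the time is", names[hour12], names[minute], suffix]

def concatenate(wav_paths, out_path):
    """Splice WAV files end to end; all inputs must share identical parameters."""
    with wave.open(out_path, "wb") as out:
        for i, path in enumerate(wav_paths):
            with wave.open(path, "rb") as clip:
                if i == 0:
                    out.setparams(clip.getparams())
                out.writeframes(clip.readframes(clip.getnframes()))

words = words_for_time(10, 30)
concatenate([PROMPTS[w] for w in words], "announcement.wav")
```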
The blending of words within naturally spoken language however can still cause problems unless the many variations are taken into account. For example, in non-rhotic dialects of English the "r" in words like "clear" is usually only pronounced when the following word has a vowel as its first letter (e.g. "clear out" is realized as ). Likewise in French, many final consonants become no longer silent if followed by a word that begins with a vowel, an effect called liaison. This alternation cannot be reproduced by a simple word-concatenation system, which would require additional complexity to be context-sensitive. Formant synthesis Formant synthesis does not use human speech samples at runtime. Instead, the synthesized speech output is created using additive synthesis and an acoustic model (physical modelling synthesis). Parameters such as fundamental frequency, voicing, and noise levels are varied over time to create a waveform of artificial speech. This method is sometimes called rules-based synthesis; however, many concatenative systems also have rules-based components. Many systems based on formant synthesis technology generate artificial, robotic-sounding speech that would never be mistaken for human speech. However, maximum naturalness is not always the goal of a speech synthesis system, and formant synthesis systems have advantages over concatenative systems. Formant-synthesized speech can be reliably intelligible, even at very high speeds, avoiding the acoustic glitches that commonly plague concatenative systems. High-speed synthesized speech is used by the visually impaired to quickly navigate computers using a screen reader. Formant synthesizers are usually smaller programs than concatenative systems because they do not have a database of speech samples. They can therefore be used in embedded systems, where memory and microprocessor power are especially limited. Because formant-based systems have complete control of all aspects of the output speech, a wide variety of prosodies and intonations can be output, conveying not just questions and statements, but a variety of emotions and tones of voice. Examples of non-real-time but highly accurate intonation control in formant synthesis include the work done in the late 1970s for the Texas Instruments toy Speak & Spell, and in the early 1980s Sega arcade machines and in many Atari, Inc. arcade games using the TMS5220 LPC Chips. Creating proper intonation for these projects was painstaking, and the results have yet to be matched by real-time text-to-speech interfaces. Articulatory synthesis Articulatory synthesis consists of computational techniques for synthesizing speech based on models of the human vocal tract and the articulation processes occurring there. The first articulatory synthesizer regularly used for laboratory experiments was developed at Haskins Laboratories in the mid-1970s by Philip Rubin, Tom Baer, and Paul Mermelstein. This synthesizer, known as ASY, was based on vocal tract models developed at Bell Laboratories in the 1960s and 1970s by Paul Mermelstein, Cecil Coker, and colleagues. Until recently, articulatory synthesis models have not been incorporated into commercial speech synthesis systems. A notable exception is the NeXT-based system originally developed and marketed by Trillium Sound Research, a spin-off company of the University of Calgary, where much of the original research was conducted. 
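As a bare-bones illustration of the formant approach described above, the sketch below excites a small cascade of second-order resonators (one per formant) with a periodic pulse train. The formant frequencies and bandwidths are rough textbook-style values for an open, /a/-like vowel and are chosen only for illustration; a real formant synthesizer updates many more parameters frame by frame.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.io import wavfile

fs = 16000                      # sample rate in Hz
f0 = 120                        # fundamental frequency of the pulse train
duration = 0.5                  # seconds

# Glottal-like excitation: one impulse per pitch period.
n = int(fs * duration)
source = np.zeros(n)
source[::fs // f0] = 1.0

def resonator(x, freq, bandwidth, fs):
    """Second-order IIR resonator acting as a single 'formant' filter."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    b = [sum(a)]                # normalise for unity gain at DC
    return lfilter(b, a, x)

# Approximate formant frequencies/bandwidths for an open vowel (illustrative only).
speech = source
for freq, bw in [(730, 90), (1090, 110), (2440, 170)]:
    speech = resonator(speech, freq, bw, fs)

speech /= np.max(np.abs(speech))                          # normalise to [-1, 1]
wavfile.write("vowel.wav", fs, (speech * 32767).astype(np.int16))
```

Varying the formant frequencies, bandwidths and the excitation over time is what turns this static vowel generator into a rule-driven synthesizer.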
Following the demise of the various incarnations of NeXT (started by Steve Jobs in the late 1980s and merged with Apple Computer in 1997), the Trillium software was published under the GNU General Public License, with work continuing as gnuspeech. The system, first marketed in 1994, provides full articulatory-based text-to-speech conversion using a waveguide or transmission-line analog of the human oral and nasal tracts controlled by Carré's "distinctive region model". More recent synthesizers, developed by Jorge C. Lucero and colleagues, incorporate models of vocal fold biomechanics, glottal aerodynamics and acoustic wave propagation in the bronchi, trachea, nasal and oral cavities, and thus constitute full systems of physics-based speech simulation. HMM-based synthesis HMM-based synthesis is a synthesis method based on hidden Markov models, also called Statistical Parametric Synthesis. In this system, the frequency spectrum (vocal tract), fundamental frequency (voice source), and duration (prosody) of speech are modeled simultaneously by HMMs. Speech waveforms are generated from the HMMs themselves based on the maximum likelihood criterion. Sinewave synthesis Sinewave synthesis is a technique for synthesizing speech by replacing the formants (main bands of energy) with pure tone whistles. Deep learning-based synthesis Deep learning speech synthesis uses deep neural networks (DNNs) to produce artificial speech from text (text-to-speech) or spectrum (vocoder). The deep neural networks are trained using a large amount of recorded speech and, in the case of a text-to-speech system, the associated labels and/or input text. 15.ai uses a multi-speaker model: hundreds of voices are trained concurrently rather than sequentially, decreasing the required training time and enabling the model to learn and generalize shared emotional context, even for voices with no exposure to such emotional context. The deep learning model used by the application is nondeterministic: each time that speech is generated from the same string of text, the intonation of the speech will be slightly different. The application also supports manually altering the emotion of a generated line using emotional contextualizers (a term coined by this project), a sentence or phrase that conveys the emotion of the take and serves as a guide for the model during inference. ElevenLabs is primarily known for its browser-based, AI-assisted text-to-speech software, Speech Synthesis, which can produce lifelike speech by synthesizing vocal emotion and intonation. The company states its software is built to adjust the intonation and pacing of delivery based on the context of the language input used. It uses advanced algorithms to analyze the contextual aspects of text, aiming to detect emotions like anger, sadness, happiness, or alarm, which enables the system to understand the user's sentiment, resulting in a more realistic and human-like inflection. Other features include multilingual speech generation and long-form content creation with contextually aware voices. DNN-based speech synthesizers are approaching the naturalness of the human voice. Disadvantages of the method include low robustness when data are insufficient, a lack of controllability, and low performance in auto-regressive models. For tonal languages, such as Chinese or Taiwanese, different levels of tone sandhi are required, and the output of a speech synthesizer can sometimes contain tone sandhi errors. 
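Sinewave synthesis, mentioned above, replaces each formant with a single time-varying pure tone. A minimal sketch follows; the three formant tracks are simple hypothetical glides rather than contours measured from real speech, and the sampling rate and amplitudes are arbitrary choices.

```python
import numpy as np

fs = 16000
duration = 0.6
t = np.arange(int(fs * duration)) / fs

# Hypothetical formant tracks (Hz): linear glides standing in for measured contours.
tracks = [
    np.linspace(700, 300, t.size),    # F1
    np.linspace(1200, 2200, t.size),  # F2
    np.linspace(2500, 2900, t.size),  # F3
]
amplitudes = [1.0, 0.5, 0.25]

# Each replacement "whistle" is a sinusoid whose phase is the integral of its track.
signal = np.zeros_like(t)
for track, amp in zip(tracks, amplitudes):
    phase = 2 * np.pi * np.cumsum(track) / fs
    signal += amp * np.sin(phase)

signal /= np.max(np.abs(signal))
```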
Audio deepfakes In 2023, VICE reporter Joseph Cox published findings that he had recorded five minutes of himself talking and then used a tool developed by ElevenLabs to create voice deepfakes that defeated a bank's voice-authentication system. Challenges Text normalization challenges The process of normalizing text is rarely straightforward. Texts are full of heteronyms, numbers, and abbreviations that all require expansion into a phonetic representation. There are many spellings in English which are pronounced differently based on context. For example, "My latest project is to learn how to better project my voice" contains two pronunciations of "project". Most text-to-speech (TTS) systems do not generate semantic representations of their input texts, as processes for doing so are unreliable, poorly understood, and computationally ineffective. As a result, various heuristic techniques are used to guess the proper way to disambiguate homographs, like examining neighboring words and using statistics about frequency of occurrence. Recently TTS systems have begun to use HMMs (discussed above) to generate "parts of speech" to aid in disambiguating homographs. This technique is quite successful for many cases such as whether "read" should be pronounced as "red" implying past tense, or as "reed" implying present tense. Typical error rates when using HMMs in this fashion are usually below five percent. These techniques also work well for most European languages, although access to required training corpora is frequently difficult in these languages. Deciding how to convert numbers is another problem that TTS systems have to address. It is a simple programming challenge to convert a number into words (at least in English), like "1325" becoming "one thousand three hundred twenty-five". However, numbers occur in many different contexts; "1325" may also be read as "one three two five", "thirteen twenty-five" or "thirteen hundred and twenty five". A TTS system can often infer how to expand a number based on surrounding words, numbers, and punctuation, and sometimes the system provides a way to specify the context if it is ambiguous. Roman numerals can also be read differently depending on context. For example, "Henry VIII" reads as "Henry the Eighth", while "Chapter VIII" reads as "Chapter Eight". Similarly, abbreviations can be ambiguous. For example, the abbreviation "in" for "inches" must be differentiated from the word "in", and the address "12 St John St." uses the same abbreviation for both "Saint" and "Street". TTS systems with intelligent front ends can make educated guesses about ambiguous abbreviations, while others provide the same result in all cases, resulting in nonsensical (and sometimes comical) outputs, such as "Ulysses S. Grant" being rendered as "Ulysses South Grant". Text-to-phoneme challenges Speech synthesis systems use two basic approaches to determine the pronunciation of a word based on its spelling, a process which is often called text-to-phoneme or grapheme-to-phoneme conversion (phoneme is the term used by linguists to describe distinctive sounds in a language). The simplest approach to text-to-phoneme conversion is the dictionary-based approach, where a large dictionary containing all the words of a language and their correct pronunciations is stored by the program. Determining the correct pronunciation of each word is a matter of looking up each word in the dictionary and replacing the spelling with the pronunciation specified in the dictionary. 
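The number-expansion problem discussed above can be sketched in a few lines of code. The heuristic here is deliberately crude: a four-digit number is read as a year ("thirteen twenty-five") only when it follows a cue word such as "in", and as a cardinal otherwise. Real front-ends use much richer context, language-specific rules and exception lists; the function and variable names are just illustrative.

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def two_digits(n):
    if n < 20:
        return ONES[n]
    tens, ones = divmod(n, 10)
    return TENS[tens] + ("-" + ONES[ones] if ones else "")

def cardinal(n):
    """Spell out 0..9999 as a cardinal number."""
    parts = []
    if n >= 1000:
        parts.append(ONES[n // 1000] + " thousand")
        n %= 1000
    if n >= 100:
        parts.append(ONES[n // 100] + " hundred")
        n %= 100
    if n or not parts:
        parts.append(two_digits(n))
    return " ".join(parts)

def year(n):
    """Read a four-digit number as a year, e.g. 1325 -> 'thirteen twenty-five'."""
    hi, lo = divmod(n, 100)
    return two_digits(hi) + " " + ("hundred" if lo == 0 else two_digits(lo))

def expand_numbers(text):
    tokens = text.split()
    out = []
    for i, token in enumerate(tokens):
        if token.isdigit():
            n = int(token)
            year_context = i > 0 and tokens[i - 1].lower() in {"in", "since", "year"}
            out.append(year(n) if len(token) == 4 and year_context else cardinal(n))
        else:
            out.append(token)
    return " ".join(out)

print(expand_numbers("The battle in 1325 involved 1325 soldiers"))
# -> The battle in thirteen twenty-five involved one thousand three hundred twenty-five soldiers
```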
The other approach is rule-based, in which pronunciation rules are applied to words to determine their pronunciations based on their spellings. This is similar to the "sounding out", or synthetic phonics, approach to learning reading. Each approach has advantages and drawbacks. The dictionary-based approach is quick and accurate, but completely fails if it is given a word which is not in its dictionary. As dictionary size grows, so too does the memory space requirements of the synthesis system. On the other hand, the rule-based approach works on any input, but the complexity of the rules grows substantially as the system takes into account irregular spellings or pronunciations. (Consider that the word "of" is very common in English, yet is the only word in which the letter "f" is pronounced .) As a result, nearly all speech synthesis systems use a combination of these approaches. Languages with a phonemic orthography have a very regular writing system, and the prediction of the pronunciation of words based on their spellings is quite successful. Speech synthesis systems for such languages often use the rule-based method extensively, resorting to dictionaries only for those few words, like foreign names and loanwords, whose pronunciations are not obvious from their spellings. On the other hand, speech synthesis systems for languages like English, which have extremely irregular spelling systems, are more likely to rely on dictionaries, and to use rule-based methods only for unusual words, or words that are not in their dictionaries. Evaluation challenges The consistent evaluation of speech synthesis systems may be difficult because of a lack of universally agreed objective evaluation criteria. Different organizations often use different speech data. The quality of speech synthesis systems also depends on the quality of the production technique (which may involve analogue or digital recording) and on the facilities used to replay the speech. Evaluating speech synthesis systems has therefore often been compromised by differences between production techniques and replay facilities. Since 2005, however, some researchers have started to evaluate speech synthesis systems using a common speech dataset. Prosodics and emotional content A study in the journal Speech Communication by Amy Drahota and colleagues at the University of Portsmouth, UK, reported that listeners to voice recordings could determine, at better than chance levels, whether or not the speaker was smiling. It was suggested that identification of the vocal features that signal emotional content may be used to help make synthesized speech sound more natural. One of the related issues is modification of the pitch contour of the sentence, depending upon whether it is an affirmative, interrogative or exclamatory sentence. One of the techniques for pitch modification uses discrete cosine transform in the source domain (linear prediction residual). Such pitch synchronous pitch modification techniques need a priori pitch marking of the synthesis speech database using techniques such as epoch extraction using dynamic plosion index applied on the integrated linear prediction residual of the voiced regions of speech. In general, prosody remains a challenge for speech synthesizers, and is an active research topic. 
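A toy illustration of the rule-based letter-to-sound approach described above: ordered rules are tried longest-first at each position in the word. The rule set and the ARPAbet-style phone labels are a tiny illustrative fragment, nowhere near a usable English inventory, and a practical system would also need context-sensitive rules plus a dictionary for exceptions.

```python
# Ordered (grapheme, phoneme) rules: longest / most specific first.
RULES = [
    ("tch", "CH"), ("igh", "AY"),
    ("ph", "F"), ("sh", "SH"), ("ch", "CH"), ("th", "TH"),
    ("ee", "IY"), ("oo", "UW"), ("ng", "NG"), ("qu", "K W"),
    ("a", "AE"), ("e", "EH"), ("i", "IH"), ("o", "AA"), ("u", "AH"),
    ("b", "B"), ("c", "K"), ("d", "D"), ("f", "F"), ("g", "G"),
    ("h", "HH"), ("j", "JH"), ("k", "K"), ("l", "L"), ("m", "M"),
    ("n", "N"), ("p", "P"), ("r", "R"), ("s", "S"), ("t", "T"),
    ("v", "V"), ("w", "W"), ("x", "K S"), ("y", "Y"), ("z", "Z"),
]

def letters_to_sounds(word):
    """Greedy left-to-right application of ordered letter-to-sound rules."""
    word = word.lower()
    phones, i = [], 0
    while i < len(word):
        for grapheme, phoneme in RULES:
            if word.startswith(grapheme, i):
                phones.extend(phoneme.split())
                i += len(grapheme)
                break
        else:
            i += 1          # skip characters no rule covers (apostrophes etc.)
    return phones

print(letters_to_sounds("fishing"))   # ['F', 'IH', 'SH', 'IH', 'NG']
print(letters_to_sounds("night"))     # ['N', 'AY', 'T']
```

The irregular cases the text mentions (such as "of") are exactly what such a rule inventory handles poorly, which is why hybrid dictionary-plus-rules systems dominate in practice.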
Dedicated hardware Icophone General Instrument SP0256-AL2 National Semiconductor DT1050 Digitalker (Mozer – Forrest Mozer) Texas Instruments LPC Speech Chips Hardware and software systems Popular systems offering speech synthesis as a built-in capability. Texas Instruments In the early 1980s, TI was known as a pioneer in speech synthesis, and a highly popular plug-in speech synthesizer module was available for the TI-99/4 and 4A. Speech synthesizers were offered free with the purchase of a number of cartridges and were used by many TI-written video games (games offered with speech during this promotion included Alpiner and Parsec). The synthesizer uses a variant of linear predictive coding and has a small in-built vocabulary. The original intent was to release small cartridges that plugged directly into the synthesizer unit, which would increase the device's built-in vocabulary. However, the success of software text-to-speech in the Terminal Emulator II cartridge canceled that plan. Mattel The Mattel Intellivision game console offered the Intellivoice Voice Synthesis module in 1982. It included the SP0256 Narrator speech synthesizer chip on a removable cartridge. The Narrator had 2kB of Read-Only Memory (ROM), and this was utilized to store a database of generic words that could be combined to make phrases in Intellivision games. Since the Orator chip could also accept speech data from external memory, any additional words or phrases needed could be stored inside the cartridge itself. The data consisted of strings of analog-filter coefficients to modify the behavior of the chip's synthetic vocal-tract model, rather than simple digitized samples. SAM Also released in 1982, Software Automatic Mouth was the first commercial all-software voice synthesis program. It was later used as the basis for Macintalk. The program was available for non-Macintosh Apple computers (including the Apple II, and the Lisa), various Atari models and the Commodore 64. The Apple version preferred additional hardware that contained DACs, although it could instead use the computer's one-bit audio output (with the addition of much distortion) if the card was not present. The Atari made use of the embedded POKEY audio chip. Speech playback on the Atari normally disabled interrupt requests and shut down the ANTIC chip during vocal output. The audible output is extremely distorted speech when the screen is on. The Commodore 64 made use of the 64's embedded SID audio chip. Atari Arguably, the first speech system integrated into an operating system was the circa 1983 unreleased Atari 1400XL/1450XL computers. These used the Votrax SC01 chip and a finite-state machine to enable World English Spelling text-to-speech synthesis. The Atari ST computers were sold with "stspeech.tos" on floppy disk. Apple The first speech system integrated into an operating system that shipped in quantity was Apple Computer's MacInTalk. The software was licensed from third-party developers Joseph Katz and Mark Barton (later, SoftVoice, Inc.) and was featured during the 1984 introduction of the Macintosh computer. This January demo required 512 kilobytes of RAM memory. As a result, it could not run in the 128 kilobytes of RAM the first Mac actually shipped with. So, the demo was accomplished with a prototype 512k Mac, although those in attendance were not told of this and the synthesis demo created considerable excitement for the Macintosh. In the early 1990s Apple expanded its capabilities offering system wide text-to-speech support. 
With the introduction of faster PowerPC-based computers they included higher quality voice sampling. Apple also introduced speech recognition into its systems which provided a fluid command set. More recently, Apple has added sample-based voices. Starting as a curiosity, the speech system of Apple Macintosh has evolved into a fully supported program, PlainTalk, for people with vision problems. VoiceOver was for the first time featured in 2005 in Mac OS X Tiger (10.4). During 10.4 (Tiger) and first releases of 10.5 (Leopard) there was only one standard voice shipping with Mac OS X. Starting with 10.6 (Snow Leopard), the user can choose out of a wide range list of multiple voices. VoiceOver voices feature the taking of realistic-sounding breaths between sentences, as well as improved clarity at high read rates over PlainTalk. Mac OS X also includes say, a command-line based application that converts text to audible speech. The AppleScript Standard Additions includes a say verb that allows a script to use any of the installed voices and to control the pitch, speaking rate and modulation of the spoken text. Amazon Used in Alexa and as Software as a Service in AWS (from 2017). AmigaOS The second operating system to feature advanced speech synthesis capabilities was AmigaOS, introduced in 1985. The voice synthesis was licensed by Commodore International from SoftVoice, Inc., who also developed the original MacinTalk text-to-speech system. It featured a complete system of voice emulation for American English, with both male and female voices and "stress" indicator markers, made possible through the Amiga's audio chipset. The synthesis system was divided into a translator library which converted unrestricted English text into a standard set of phonetic codes and a narrator device which implemented a formant model of speech generation.. AmigaOS also featured a high-level "Speak Handler", which allowed command-line users to redirect text output to speech. Speech synthesis was occasionally used in third-party programs, particularly word processors and educational software. The synthesis software remained largely unchanged from the first AmigaOS release and Commodore eventually removed speech synthesis support from AmigaOS 2.1 onward. Despite the American English phoneme limitation, an unofficial version with multilingual speech synthesis was developed. This made use of an enhanced version of the translator library which could translate a number of languages, given a set of rules for each language. Microsoft Windows Modern Windows desktop systems can use SAPI 4 and SAPI 5 components to support speech synthesis and speech recognition. SAPI 4.0 was available as an optional add-on for Windows 95 and Windows 98. Windows 2000 added Narrator, a text-to-speech utility for people who have visual impairment. Third-party programs such as JAWS for Windows, Window-Eyes, Non-visual Desktop Access, Supernova and System Access can perform various text-to-speech tasks such as reading text aloud from a specified website, email account, text document, the Windows clipboard, the user's keyboard typing, etc. Not all programs can use speech synthesis directly. Some programs can use plug-ins, extensions or add-ons to read text aloud. Third-party programs are available that can read text from the system clipboard. Microsoft Speech Server is a server-based package for voice synthesis and recognition. It is designed for network use with web applications and call centers. 
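The say utility mentioned above can be driven from any scripting language. A minimal sketch (macOS only): the voice name, speaking rate and output file below are arbitrary examples, and the voices actually available vary from system to system.

```python
import subprocess

def speak(text, voice="Samantha", rate_wpm=180, out_file=None):
    """Invoke the macOS `say` command-line tool.
    When out_file is given, audio is written to disk instead of played aloud."""
    cmd = ["say", "-v", voice, "-r", str(rate_wpm)]
    if out_file:
        cmd += ["-o", out_file]
    cmd.append(text)
    subprocess.run(cmd, check=True)

speak("The quick brown fox jumps over the lazy dog.")
speak("Saved to disk instead.", out_file="demo.aiff")
```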
Votrax From 1971 to 1996, Votrax produced a number of commercial speech synthesizer components. A Votrax synthesizer was included in the first generation Kurzweil Reading Machine for the Blind. Text-to-speech systems Text-to-speech (TTS) refers to the ability of computers to read text aloud. A TTS engine converts written text to a phonemic representation, then converts the phonemic representation to waveforms that can be output as sound. TTS engines with different languages, dialects and specialized vocabularies are available through third-party publishers. Android Version 1.6 of Android added support for speech synthesis (TTS). Internet Currently, there are a number of applications, plugins and gadgets that can read messages directly from an e-mail client and web pages from a web browser or Google Toolbar. Some specialized software can narrate RSS-feeds. On one hand, online RSS-narrators simplify information delivery by allowing users to listen to their favourite news sources and to convert them to podcasts. On the other hand, on-line RSS-readers are available on almost any personal computer connected to the Internet. Users can download generated audio files to portable devices, e.g. with a help of podcast receiver, and listen to them while walking, jogging or commuting to work. A growing field in Internet based TTS is web-based assistive technology, e.g. 'Browsealoud' from a UK company and Readspeaker. It can deliver TTS functionality to anyone (for reasons of accessibility, convenience, entertainment or information) with access to a web browser. The non-profit project Pediaphon was created in 2006 to provide a similar web-based TTS interface to the Wikipedia. Other work is being done in the context of the W3C through the W3C Audio Incubator Group with the involvement of The BBC and Google Inc. Open source Some open-source software systems are available, such as: eSpeak which supports a broad range of languages. Festival Speech Synthesis System which uses diphone-based synthesis, as well as more modern and better-sounding techniques. gnuspeech which uses articulatory synthesis from the Free Software Foundation. Others Following the commercial failure of the hardware-based Intellivoice, gaming developers sparingly used software synthesis in later games. Earlier systems from Atari, such as the Atari 5200 (Baseball) and the Atari 2600 (Quadrun and Open Sesame), also had games utilizing software synthesis. Some e-book readers, such as the Amazon Kindle, Samsung E6, PocketBook eReader Pro, enTourage eDGe, and the Bebook Neo. The BBC Micro incorporated the Texas Instruments TMS5220 speech synthesis chip. Some models of Texas Instruments home computers produced in 1979 and 1981 (Texas Instruments TI-99/4 and TI-99/4A) were capable of text-to-phoneme synthesis or reciting complete words and phrases (text-to-dictionary), using a very popular Speech Synthesizer peripheral. TI used a proprietary codec to embed complete spoken phrases into applications, primarily video games. IBM's OS/2 Warp 4 included VoiceType, a precursor to IBM ViaVoice. GPS Navigation units produced by Garmin, Magellan, TomTom and others use speech synthesis for automobile navigation. Yamaha produced a music synthesizer in 1999, the Yamaha FS1R which included a Formant synthesis capability. Sequences of up to 512 individual vowel and consonant formants could be stored and replayed, allowing short vocal phrases to be synthesized. 
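The formant approach mentioned above can be illustrated with a minimal sketch in which a periodic impulse train stands in for the glottal source and a cascade of two-pole digital resonators imposes the formant peaks. This is only a toy illustration of the general source-filter idea, not the algorithm of any product named in this section; the formant values, pitch, duration and output file name are illustrative assumptions.

import math
import struct
import wave

SAMPLE_RATE = 16000

def resonator(signal, freq_hz, bandwidth_hz, rate=SAMPLE_RATE):
    """Apply a two-pole digital resonator (one formant) to a signal."""
    r = math.exp(-math.pi * bandwidth_hz / rate)
    b = 2.0 * r * math.cos(2.0 * math.pi * freq_hz / rate)
    c = -r * r
    a = 1.0 - b - c  # rough gain normalization so levels stay stable
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = a * x + b * y1 + c * y2
        out.append(y)
        y2, y1 = y1, y
    return out

def impulse_train(pitch_hz, duration_s, rate=SAMPLE_RATE):
    """Crude glottal source: one impulse per pitch period."""
    period = int(rate / pitch_hz)
    n = int(duration_s * rate)
    return [1.0 if i % period == 0 else 0.0 for i in range(n)]

def synthesize_vowel(formants, pitch_hz=120, duration_s=0.5):
    """Cascade one resonator per (frequency, bandwidth) pair and normalize."""
    signal = impulse_train(pitch_hz, duration_s)
    for freq, bw in formants:
        signal = resonator(signal, freq, bw)
    peak = max(abs(s) for s in signal) or 1.0
    return [s / peak for s in signal]

# Rough formant targets for an "ah"-like vowel (illustrative values only).
AH_FORMANTS = [(730, 90), (1090, 110), (2440, 170)]

samples = synthesize_vowel(AH_FORMANTS)
with wave.open("vowel.wav", "w") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

Running the script writes a half-second buzzy "ah"-like vowel to vowel.wav; practical formant synthesizers such as Klatt-style systems add parallel branches, noise sources and time-varying parameters to produce connected speech.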
Digital sound-alikes At the 2018 Conference on Neural Information Processing Systems (NeurIPS), researchers from Google presented 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis', which transfers learning from speaker verification to text-to-speech synthesis and can produce speech that sounds almost like anybody from a speech sample of only 5 seconds. Researchers from Baidu Research presented a voice cloning system with similar aims at the same 2018 NeurIPS conference, although the cloned voices were less convincing. By 2019, digital sound-alikes had found their way into the hands of criminals: Symantec researchers reported three cases in which the technology had been used for crime. This adds to concerns about disinformation, coupled with the fact that human image synthesis has, since the early 2000s, improved to the point where people can no longer reliably distinguish a real human imaged with a real camera from a simulation of a human imaged with a simulated camera. 2D video forgery techniques were presented in 2016 that allow near real-time counterfeiting of facial expressions in existing 2D video. At SIGGRAPH 2017, researchers from the University of Washington presented an audio-driven digital look-alike of the upper torso of Barack Obama. After a training phase that acquired lip sync and wider facial information from training material consisting of 2D videos with audio, it was driven only by a voice track as the source data for the animation. In March 2020, 15.ai, a freeware web application that generates high-quality voices for an assortment of fictional characters from a variety of media sources, was released. Initial characters included GLaDOS from Portal, Twilight Sparkle and Fluttershy from the show My Little Pony: Friendship Is Magic, and the Tenth Doctor from Doctor Who. Speech synthesis markup languages A number of markup languages have been established for the rendition of text as speech in an XML-compliant format. The most recent is Speech Synthesis Markup Language (SSML), which became a W3C recommendation in 2004. Older speech synthesis markup languages include Java Speech Markup Language (JSML) and SABLE. Although each of these was proposed as a standard, none of them has been widely adopted. Speech synthesis markup languages are distinguished from dialogue markup languages. VoiceXML, for example, includes tags related to speech recognition, dialogue management and touchtone dialing, in addition to text-to-speech markup. Applications Speech synthesis has long been a vital assistive technology tool and its application in this area is significant and widespread. It allows environmental barriers to be removed for people with a wide range of disabilities. The longest-standing application has been in the use of screen readers for people with visual impairment, but text-to-speech systems are now commonly used by people with dyslexia and other reading disabilities as well as by pre-literate children. They are also frequently employed to aid those with severe speech impairment, usually through a dedicated voice output communication aid. Work to personalize a synthetic voice to better match a person's personality or historical voice is becoming available. A noted application of speech synthesis was the Kurzweil Reading Machine for the Blind, which incorporated text-to-phonetics software based on work from Haskins Laboratories and a black-box synthesizer built by Votrax. 
Speech synthesis techniques are also used in entertainment productions such as games and animations. In 2007, Animo Limited announced the development of a software application package based on its speech synthesis software FineSpeech, explicitly geared towards customers in the entertainment industries, able to generate narration and lines of dialogue according to user specifications. The application reached maturity in 2008, when NEC Biglobe announced a web service that allows users to create phrases from the voices of characters from the Japanese anime series Code Geass: Lelouch of the Rebellion R2. 15.ai has been frequently used for content creation in various fandoms, including the My Little Pony: Friendship Is Magic fandom, the Team Fortress 2 fandom, the Portal fandom, and the SpongeBob SquarePants fandom. Text-to-speech for disability and impaired communication aids have become widely available. Text-to-speech is also finding new applications; for example, speech synthesis combined with speech recognition allows for interaction with mobile devices via natural language processing interfaces. Some users have also created AI virtual assistants using 15.ai and external voice control software. Text-to-speech is also used in second language acquisition. Voki, for instance, is an educational tool created by Oddcast that allows users to create their own talking avatar, using different accents. They can be emailed, embedded on websites or shared on social media. Content creators have used voice cloning tools to recreate their voices for podcasts, narration, and comedy shows. Publishers and authors have also used such software to narrate audiobooks and newsletters. Another area of application is AI video creation with talking heads. Webapps and video editors like Elai.io or Synthesia allow users to create video content involving AI avatars, who are made to speak using text-to-speech technology. Speech synthesis is a valuable computational aid for the analysis and assessment of speech disorders. A voice quality synthesizer, developed by Jorge C. Lucero et al. at the University of Brasília, simulates the physics of phonation and includes models of vocal frequency jitter and tremor, airflow noise and laryngeal asymmetries. The synthesizer has been used to mimic the timbre of dysphonic speakers with controlled levels of roughness, breathiness and strain. Singing synthesis
Technology
Media and communication
null
42806
https://en.wikipedia.org/wiki/Cyclone
Cyclone
In meteorology, a cyclone () is a large air mass that rotates around a strong center of low atmospheric pressure, counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere as viewed from above (opposite to an anticyclone). Cyclones are characterized by inward-spiraling winds that rotate about a zone of low pressure. The largest low-pressure systems are polar vortices and extratropical cyclones of the largest scale (the synoptic scale). Warm-core cyclones such as tropical cyclones and subtropical cyclones also lie within the synoptic scale. Mesocyclones, tornadoes, and dust devils lie within the smaller mesoscale. Upper level cyclones can exist without the presence of a surface low, and can pinch off from the base of the tropical upper tropospheric trough during the summer months in the Northern Hemisphere. Cyclones have also been seen on extraterrestrial planets, such as Mars, Jupiter, and Neptune. Cyclogenesis is the process of cyclone formation and intensification. Extratropical cyclones begin as waves in large regions of enhanced mid-latitude temperature contrasts called baroclinic zones. These zones contract and form weather fronts as the cyclonic circulation closes and intensifies. Later in their life cycle, extratropical cyclones occlude as cold air masses undercut the warmer air and become cold core systems. A cyclone's track is guided over the course of its 2 to 6 day life cycle by the steering flow of the subtropical jet stream. Weather fronts mark the boundary between two masses of air of different temperature, humidity, and densities, and are associated with the most prominent meteorological phenomena. Strong cold fronts typically feature narrow bands of thunderstorms and severe weather, and may on occasion be preceded by squall lines or dry lines. Such fronts form west of the circulation center and generally move from west to east; warm fronts form east of the cyclone center and are usually preceded by stratiform precipitation and fog. Warm fronts move poleward ahead of the cyclone path. Occluded fronts form late in the cyclone life cycle near the center of the cyclone and often wrap around the storm center. Tropical cyclogenesis describes the process of development of tropical cyclones. Tropical cyclones form due to latent heat driven by significant thunderstorm activity, and are warm core. Cyclones can transition between extratropical, subtropical, and tropical phases. Mesocyclones form as warm core cyclones over land, and can lead to tornado formation. Waterspouts can also form from mesocyclones, but more often develop from environments of high instability and low vertical wind shear. In the Atlantic and the northeastern Pacific oceans, a tropical cyclone is generally referred to as a hurricane (from the name of the ancient Central American deity of wind, Huracan), in the Indian and south Pacific oceans it is called a cyclone, and in the northwestern Pacific it is called a typhoon. The growth of instability in the vortices is not universal. For example, the size, intensity, moist-convection, surface evaporation, the value of potential temperature at each potential height can affect the nonlinear evolution of a vortex. Nomenclature Henry Piddington published 40 papers dealing with tropical storms from Calcutta between 1836 and 1855 in The Journal of the Asiatic Society. He also coined the term cyclone, meaning the coil of a snake. In 1842, he published his landmark thesis, Laws of the Storms. 
Structure There are a number of structural characteristics common to all cyclones. A cyclone is a low-pressure area. A cyclone's center (often known in a mature tropical cyclone as the eye) is the area of lowest atmospheric pressure in the region. Near the center, the pressure gradient force (from the pressure in the center of the cyclone compared to the pressure outside the cyclone) and the force from the Coriolis effect must be in an approximate balance, or the cyclone would collapse on itself as a result of the difference in pressure. Because of the Coriolis effect, the wind flow around a large cyclone is counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. In the Northern Hemisphere, the fastest winds relative to the surface of the Earth therefore occur on the eastern side of a northward-moving cyclone and on the northern side of a westward-moving one; the opposite occurs in the Southern Hemisphere. In contrast to low-pressure systems, the wind flow around high-pressure systems is clockwise (anticyclonic) in the northern hemisphere, and counterclockwise in the southern hemisphere. Formation Cyclogenesis is the development or strengthening of cyclonic circulation in the atmosphere. Cyclogenesis is an umbrella term for several different processes that all result in the development of some sort of cyclone. It can occur at various scales, from the microscale to the synoptic scale. Extratropical cyclones begin as waves along weather fronts before occluding later in their life cycle as cold-core systems. However, some intense extratropical cyclones can become warm-core systems when a warm seclusion occurs. Tropical cyclones form as a result of significant convective activity, and are warm core. Mesocyclones form as warm core cyclones over land, and can lead to tornado formation. Waterspouts can also form from mesocyclones, but more often develop from environments of high instability and low vertical wind shear. Cyclolysis, the weakening or dissipation of cyclonic circulation, is the opposite of cyclogenesis; the equivalent process for high-pressure systems, which deals with the formation of high-pressure areas, is anticyclogenesis. A surface low can form in a variety of ways. Topography can create a surface low. Mesoscale convective systems can spawn surface lows that are initially warm-core. The disturbance can grow into a wave-like formation along the front and the low is positioned at the crest. Around the low, the flow becomes cyclonic. This rotational flow moves polar air towards the equator on the west side of the low, while warm air moves towards the pole on the east side. A cold front appears on the west side, while a warm front forms on the east side. Usually, the cold front moves at a quicker pace than the warm front and "catches up" with it due to the slow erosion of the higher density air mass out ahead of the cyclone. In addition, the higher density air mass sweeping in behind the cyclone strengthens the higher pressure, denser cold air mass. The cold front overtakes the warm front, and reduces the length of the warm front. At this point an occluded front forms where the warm air mass is pushed upwards into a trough of warm air aloft, which is also known as a trowal. Tropical cyclogenesis is the development and strengthening of a tropical cyclone. The mechanisms by which tropical cyclogenesis occurs are distinctly different from those that produce mid-latitude cyclones. Tropical cyclogenesis, the development of a warm-core cyclone, begins with significant convection in a favorable atmospheric environment. 
There are six main requirements for tropical cyclogenesis: sufficiently warm sea surface temperatures, atmospheric instability, high humidity in the lower to middle levels of the troposphere, enough Coriolis force to develop a low-pressure center, a preexisting low-level focus or disturbance, and low vertical wind shear. An average of 86 tropical cyclones of tropical storm intensity form annually worldwide, with 47 reaching hurricane/typhoon strength and 20 becoming intense tropical cyclones (at least Category 3 intensity on the Saffir–Simpson hurricane scale). Synoptic scale The following types of cyclones are identifiable in synoptic charts. Surface-based types There are three main types of surface-based cyclones: extratropical cyclones, subtropical cyclones and tropical cyclones. Extratropical cyclone An extratropical cyclone is a synoptic scale low-pressure weather system that does not have tropical characteristics, as it is connected with fronts and horizontal gradients (rather than vertical) in temperature and dew point, otherwise known as "baroclinic zones". "Extratropical" is applied to cyclones outside the tropics, in the middle latitudes. These systems may also be described as "mid-latitude cyclones" due to their area of formation, or "post-tropical cyclones" when a tropical cyclone has moved (extratropical transition) beyond the tropics. They are often described as "depressions" or "lows" by weather forecasters and the general public. These are the everyday phenomena that, along with anticyclones, drive weather over much of the Earth. Although extratropical cyclones are almost always classified as baroclinic since they form along zones of temperature and dewpoint gradient within the westerlies, they can sometimes become barotropic late in their life cycle when the temperature distribution around the cyclone becomes fairly uniform with radius. An extratropical cyclone can transform into a subtropical storm, and from there into a tropical cyclone, if it dwells over warm waters sufficient to warm its core, and as a result develops central convection. A particularly intense type of extratropical cyclone that strikes during winter is known colloquially as a nor'easter. Polar low A polar low is a small-scale, short-lived atmospheric low-pressure system (depression) that is found over the ocean areas poleward of the main polar front in both the Northern and Southern Hemispheres. Polar lows were first identified on the meteorological satellite imagery that became available in the 1960s, which revealed many small-scale cloud vortices at high latitudes. The most active polar lows are found over certain ice-free maritime areas in or near the Arctic during the winter, such as the Norwegian Sea, Barents Sea, Labrador Sea and Gulf of Alaska. Polar lows dissipate rapidly when they make landfall. Antarctic systems tend to be weaker than their northern counterparts since the air-sea temperature differences around the continent are generally smaller. However, vigorous polar lows can be found over the Southern Ocean. During winter, when cold-core lows with sufficiently cold temperatures in the mid-levels of the troposphere move over open waters, deep convection forms, which allows polar low development to become possible. The systems usually have a horizontal length scale of less than and exist for no more than a couple of days. They are part of the larger class of mesoscale weather systems. 
Polar lows can be difficult to detect using conventional weather reports and are a hazard to high-latitude operations, such as shipping and gas and oil platforms. Polar lows have been referred to by many other terms, such as polar mesoscale vortex, Arctic hurricane, Arctic low, and cold air depression. Today the term is usually reserved for the more vigorous systems that have near-surface winds of at least 17 m/s. Subtropical A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone. They can form between the equator and the 50th parallel. As early as the 1950s, meteorologists were unclear whether they should be characterized as tropical cyclones or extratropical cyclones, and used terms such as quasi-tropical and semi-tropical to describe the cyclone hybrids. By 1972, the National Hurricane Center officially recognized this cyclone category. Subtropical cyclones began to receive names off the official tropical cyclone list in the Atlantic Basin in 2002. They have broad wind patterns with maximum sustained winds located farther from the center than typical tropical cyclones, and exist in areas of weak to moderate temperature gradient. Since they form from extratropical cyclones, which have colder temperatures aloft than normally found in the tropics, the sea surface temperatures required is around 23 degrees Celsius (73 °F) for their formation, which is three degrees Celsius (5 °F) lower than for tropical cyclones. This means that subtropical cyclones are more likely to form outside the traditional bounds of the hurricane season. Although subtropical storms rarely have hurricane-force winds, they may become tropical in nature as their cores warm. Tropical A tropical cyclone is a storm system characterized by a low-pressure center and numerous thunderstorms that produce strong winds and flooding rain. A tropical cyclone feeds on heat released when moist air rises, resulting in condensation of water vapour contained in the moist air. They are fueled by a different heat mechanism than other cyclonic windstorms such as nor'easters, European windstorms, and polar lows, leading to their classification as "warm core" storm systems. The term "tropical" refers to both the geographic origin of these systems, which form almost exclusively in tropical regions of the globe, and their dependence on Maritime Tropical air masses for their formation. The term "cyclone" refers to the storms' cyclonic nature, with counterclockwise rotation in the Northern Hemisphere and clockwise rotation in the Southern Hemisphere. Depending on their location and strength, tropical cyclones are referred to by other names, such as hurricane, typhoon, tropical storm, cyclonic storm, tropical depression, or simply as a cyclone. While tropical cyclones can produce extremely powerful winds and torrential rain, they are also able to produce high waves and a damaging storm surge. Their winds increase the wave size, and in so doing they draw more heat and moisture into their system, thereby increasing their strength. They develop over large bodies of warm water, and hence lose their strength if they move over land. This is the reason coastal regions can receive significant damage from a tropical cyclone, while inland regions are relatively safe from strong winds. Heavy rains, however, can produce significant flooding inland. 
Storm surges are rises in sea level caused by the reduced pressure of the core that in effect "sucks" the water upward and from winds that in effect "pile" the water up. Storm surges can produce extensive coastal flooding up to from the coastline. Although their effects on human populations can be devastating, tropical cyclones can also relieve drought conditions. They also carry heat and energy away from the tropics and transport it toward temperate latitudes, which makes them an important part of the global atmospheric circulation mechanism. As a result, tropical cyclones help to maintain equilibrium in the Earth's troposphere. Many tropical cyclones develop when the atmospheric conditions around a weak disturbance in the atmosphere are favorable. Others form when other types of cyclones acquire tropical characteristics. Tropical systems are then moved by steering winds in the troposphere; if the conditions remain favorable, the tropical disturbance intensifies, and can even develop an eye. On the other end of the spectrum, if the conditions around the system deteriorate or the tropical cyclone makes landfall, the system weakens and eventually dissipates. A tropical cyclone can become extratropical as it moves toward higher latitudes if its energy source changes from heat released by condensation to differences in temperature between air masses. A tropical cyclone is usually not considered to become subtropical during its extratropical transition. Upper level types Polar cyclone A polar, sub-polar, or Arctic cyclone (also known as a polar vortex) is a vast area of low pressure that strengthens in the winter and weakens in the summer. A polar cyclone is a low-pressure weather system, usually spanning to , in which the air circulates in a counterclockwise direction in the northern hemisphere, and a clockwise direction in the southern hemisphere. The Coriolis acceleration acting on the air masses moving poleward at high altitude, causes a counterclockwise circulation at high altitude. The poleward movement of air originates from the air circulation of the Polar cell. The polar low is not driven by convection as are tropical cyclones, nor the cold and warm air mass interactions as are extratropical cyclones, but is an artifact of the global air movement of the Polar cell. The base of the polar low is in the mid to upper troposphere. In the Northern Hemisphere, the polar cyclone has two centers on average. One center lies near Baffin Island and the other over northeast Siberia. In the southern hemisphere, it tends to be located near the edge of the Ross ice shelf near 160 west longitude. When the polar vortex is strong, its effect can be felt at the surface as a westerly wind (toward the east). When the polar cyclone is weak, significant cold outbreaks occur. TUTT cell Under specific circumstances, upper level cold lows can break off from the base of the tropical upper tropospheric trough (TUTT), which is located mid-ocean in the Northern Hemisphere during the summer months. These upper tropospheric cyclonic vortices, also known as TUTT cells or TUTT lows, usually move slowly from east-northeast to west-southwest, and their bases generally do not extend below in altitude. A weak inverted surface trough within the trade wind is generally found underneath them, and they may also be associated with broad areas of high-level clouds. Downward development results in an increase of cumulus clouds and the appearance of a surface vortex. In rare cases, they become warm-core tropical cyclones. 
Upper cyclones and the upper troughs that trail tropical cyclones can cause additional outflow channels and aid in their intensification. Developing tropical disturbances can help create or deepen upper troughs or upper lows in their wake due to the outflow jet emanating from the developing tropical disturbance/cyclone. Mesoscale The following types of cyclones are not identifiable in synoptic charts. Mesocyclone A mesocyclone is a vortex of air, to in diameter (the mesoscale of meteorology), within a convective storm. Air rises and rotates around a vertical axis, usually in the same direction as low-pressure systems in both northern and southern hemisphere. They are most often cyclonic, that is, associated with a localized low-pressure region within a supercell. Such storms can feature strong surface winds and severe hail. Mesocyclones often occur together with updrafts in supercells, where tornadoes may form. About 1,700 mesocyclones form annually across the United States, but only half produce tornadoes. Tornado A tornado is a violently rotating column of air that is in contact with both the surface of the earth and a cumulonimbus cloud or, in rare cases, the base of a cumulus cloud. Also referred to as twisters, a colloquial term in America, or cyclones, although the word cyclone is used in meteorology, in a wider sense, to name any closed low-pressure circulation. Dust devil A dust devil is a strong, well-formed, and relatively long-lived whirlwind, ranging from small (half a metre wide and a few metres tall) to large (more than 10 metres wide and more than 1000 metres tall). The primary vertical motion is upward. Dust devils are usually harmless, but can on rare occasions grow large enough to pose a threat to both people and property. Waterspout A waterspout is a columnar vortex forming over water that is, in its most common form, a non-supercell tornado over water that is connected to a cumuliform cloud. While it is often weaker than most of its land counterparts, stronger versions spawned by mesocyclones do occur. Steam devil A gentle vortex over calm water or wet land made visible by rising water vapour. Fire whirl A fire whirl – also colloquially known as a fire devil, fire tornado, firenado, or fire twister – is a whirlwind induced by a fire and often made up of flame or ash. Other planets Cyclones are not unique to Earth. Cyclonic storms are common on giant planets, such as the Small Dark Spot on Neptune. It is about one third the diameter of the Great Dark Spot and received the nickname "Wizard's Eye" because it looks like an eye. This appearance is caused by a white cloud in the middle of the Wizard's Eye. Mars has also exhibited cyclonic storms. Jovian storms like the Great Red Spot are usually mistakenly named as giant hurricanes or cyclonic storms. However, this is inaccurate, as the Great Red Spot is, in fact, the inverse phenomenon, an anticyclone.
Physical sciences
Atmospheric circulation
null
42852
https://en.wikipedia.org/wiki/Radio%20frequency
Radio frequency
Radio frequency (RF) is the oscillation rate of an alternating electric current or voltage or of a magnetic, electric or electromagnetic field or mechanical system in the frequency range from around to around . This is roughly between the upper limit of audio frequencies and the lower limit of infrared frequencies, and also encompasses the microwave range. These are the frequencies at which energy from an oscillating current can radiate off a conductor into space as radio waves, so they are used in radio technology, among other uses. Different sources specify different upper and lower bounds for the frequency range. Electric current Electric currents that oscillate at radio frequencies (RF currents) have special properties not shared by direct current or lower audio frequency alternating current, such as the 50 or 60 Hz current used in electrical power distribution. Energy from RF currents in conductors can radiate into space as electromagnetic waves (radio waves). This is the basis of radio technology. RF current does not penetrate deeply into electrical conductors but tends to flow along their surfaces; this is known as the skin effect. RF currents applied to the body often do not cause the painful sensation and muscular contraction of electric shock that lower frequency currents produce. This is because the current changes direction too quickly to trigger depolarization of nerve membranes. However, this does not mean RF currents are harmless; they can cause internal injury as well as serious superficial burns called RF burns. RF current can ionize air, creating a conductive path through it. This property is exploited by "high frequency" units used in electric arc welding, which use currents at higher frequencies than power distribution uses. Another property is the ability to appear to flow through paths that contain insulating material, like the dielectric insulator of a capacitor. This is because capacitive reactance in a circuit decreases with increasing frequency. In contrast, RF current can be blocked by a coil of wire, or even a single turn or bend in a wire. This is because the inductive reactance of a circuit increases with increasing frequency. When conducted by an ordinary electric cable, RF current has a tendency to reflect from discontinuities in the cable, such as connectors, and travel back down the cable toward the source, causing a condition called standing waves. RF current may be carried efficiently over transmission lines such as coaxial cables. Frequency bands The radio spectrum of frequencies is divided into bands with conventional names designated by the International Telecommunication Union (ITU): {| class="wikitable" style="text-align:right" |- ! scope="col" rowspan="2" | Frequencyrange !! scope="col" rowspan="2" | Wavelengthrange !! scope="col" colspan="2" | ITU designation !! scope="col" rowspan="2" | IEEE bands |- ! scope="col" | Full name ! scope="col" | Abbreviation |- ! scope="row" | Below 3 Hz | >105 km || || style="text-align:center" | || |- ! scope="row" | 3–30 Hz | 105–104 km|| Extremely low frequency || style="text-align:center" | ELF || |- ! scope="row" | 30–300 Hz | 104–103 km|| Super low frequency || style="text-align:center" | SLF || |- ! scope="row" | 300–3000 Hz | 103–100 km|| Ultra low frequency || style="text-align:center" | ULF || |- ! scope="row" | 3–30 kHz | 100–10 km|| Very low frequency || style="text-align:center" | VLF || |- ! scope="row" | 30–300 kHz | 10–1 km|| Low frequency || style="text-align:center" | LF || |- ! 
scope="row" | 300 kHz – 3 MHz | 1 km – 100 m|| Medium frequency || style="text-align:center" | MF || |- ! scope="row" | 3–30 MHz | 100–10 m|| High frequency || style="text-align:center" | HF || style="text-align:center" | HF |- ! scope="row" | 30–300 MHz | 10–1 m|| Very high frequency || style="text-align:center" | VHF || style="text-align:center" | VHF |- ! scope="row" | 300 MHz – 3 GHz | 1 m – 100 mm|| Ultra high frequency || style="text-align:center" | UHF || style="text-align:center" | UHF, L, S |- ! scope="row" | 3–30 GHz | 100–10 mm|| Super high frequency || style="text-align:center" | SHF || style="text-align:center" | S, C, X, Ku, K, Ka |- ! scope="row" | 30–300 GHz | 10–1 mm|| Extremely high frequency || style="text-align:center" | EHF || style="text-align:center" | Ka, V, W, mm |- ! scope="row" | 300 GHz – 3 THz | 1 mm – 0.1 mm|| Tremendously high frequency || style="text-align:center" | THF || |- | | |} Frequencies of 1 GHz and above are conventionally called microwave, while frequencies of 30 GHz and above are designated millimeter wave. More detailed band designations are given by the standard IEEE letter- band frequency designations and the EU/NATO frequency designations. Applications Communications Radio frequencies are used in communication devices such as transmitters, receivers, computers, televisions, and mobile phones, to name a few. Radio frequencies are also applied in carrier current systems including telephony and control circuits. The MOS integrated circuit is the technology behind the current proliferation of radio frequency wireless telecommunications devices such as cellphones. Medicine Medical applications of radio frequency (RF) energy, in the form of electromagnetic waves (radio waves) or electrical currents, have existed for over 125 years, and now include diathermy, hyperthermy treatment of cancer, electrosurgery scalpels used to cut and cauterize in operations, and radiofrequency ablation. Magnetic resonance imaging (MRI) uses radio frequency fields to generate images of the human body. Non-surgical weight loss equipment Radio Frequency or RF energy is also being used in devices that are being advertised for weight loss and fat removal. The possible effects RF might have on the body and whether RF can lead to fat reduction needs further study. Currently, there are devices such as trusculpt ID, Venus Bliss and many others utilizing this type of energy alongside heat to target fat pockets in certain areas of the body. That being said, there is limited studies on how effective these devices are. Measurement Test apparatus for radio frequencies can include standard instruments at the lower end of the range, but at higher frequencies, the test equipment becomes more specialized. Mechanical oscillations While RF usually refers to electrical oscillations, mechanical RF systems are not uncommon: see mechanical filter and RF MEMS.
Physical sciences
Waves
Physics
42888
https://en.wikipedia.org/wiki/Human%20genome
Human genome
The human genome is a complete set of nucleic acid sequences for humans, encoded as the DNA within each of the 24 distinct chromosomes in the cell nucleus. A small DNA molecule is found within individual mitochondria. These are usually treated separately as the nuclear genome and the mitochondrial genome. Human genomes include both protein-coding DNA sequences and various types of DNA that does not encode proteins. The latter is a diverse category that includes DNA coding for non-translated RNA, such as that for ribosomal RNA, transfer RNA, ribozymes, small nuclear RNAs, and several types of regulatory RNAs. It also includes promoters and their associated gene-regulatory elements, DNA playing structural and replicatory roles, such as scaffolding regions, telomeres, centromeres, and origins of replication, plus large numbers of transposable elements, inserted viral DNA, non-functional pseudogenes and simple, highly repetitive sequences. Introns make up a large percentage of non-coding DNA. Some of this non-coding DNA is non-functional junk DNA, such as pseudogenes, but there is no firm consensus on the total amount of junk DNA. Although the sequence of the human genome has been completely determined by DNA sequencing in 2022 (including methylome), it is not yet fully understood. Most, but not all, genes have been identified by a combination of high throughput experimental and bioinformatics approaches, yet much work still needs to be done to further elucidate the biological functions of their protein and RNA products. Size of the human genome In 2000, scientists reported the sequencing of 88% of human genome, but as of 2020, at least 8% was still missing. In 2021, scientists reported sequencing a complete, female genome (i.e., without the Y chromosome). The human Y chromosome, consisting of 62,460,029 base pairs from a different cell line and found in all males, was sequenced completely in January 2022. The current version of the standard reference genome is called GRCh38.p14 (July 2023). It consists of 22 autosomes plus one copy of the X chromosome and one copy of the Y chromosome. It contains approximately 3.1 billion base pairs (3.1 Gb or 3.1 x 109 bp). This represents the size of a composite genome based on data from multiple individuals but it is a good indication of the typical amount of DNA in a haploid set of chromosomes because the Y chromosome is quite small. Most human cells are diploid so they contain twice as much DNA (~6.2 billion base pairs). In 2023, a draft human pangenome reference was published. It is based on 47 genomes from persons of varied ethnicity. Plans are underway for an improved reference capturing still more biodiversity from a still wider sample. While there are significant differences among the genomes of human individuals (on the order of 0.1% due to single-nucleotide variants and 0.6% when considering indels), these are considerably smaller than the differences between humans and their closest living relatives, the bonobos and chimpanzees (~1.1% fixed single-nucleotide variants and 4% when including indels). Molecular organization and gene content The total length of the human reference genome does not represent the sequence of any specific individual, nor does it represent the sequence of all of the DNA found within a cell. The human reference genome only includes one copy of each of the paired, homologous autosomes plus one copy of each of the two sex chromosomes (X and Y). 
The total amount of DNA in this reference genome is 3.1 billion base pairs (3.1 Gb). Protein-coding genes Protein-coding sequences represent the most widely studied and best understood component of the human genome. These sequences ultimately lead to the production of all human proteins, although several biological processes (e.g. DNA rearrangements and alternative pre-mRNA splicing) can lead to the production of many more unique proteins than the number of protein-coding genes. The human reference genome contains somewhere between 19,000 and 20,000 protein-coding genes. These genes contain an average of 10 introns and the average size of an intron is about 6 kb (6,000 bp). This means that the average size of a protein-coding gene is about 62 kb and these genes take up about 40% of the genome. Exon sequences consist of coding DNA and untranslated regions (UTRs) at either end of the mature mRNA. The total amount of coding DNA is about 1-2% of the genome. Many people divide the genome into coding and non-coding DNA based on the idea that coding DNA is the most important functional component of the genome. About 98-99% of the human genome is non-coding DNA. Non-coding genes Noncoding RNA molecules play many essential roles in cells, especially in the many reactions of protein synthesis and RNA processing. Noncoding genes include those for tRNAs, ribosomal RNAs, microRNAs, snRNAs and long non-coding RNAs (lncRNAs). The number of reported non-coding genes continues to rise slowly but the exact number in the human genome is yet to be determined. Many RNAs are thought to be non-functional. Many ncRNAs are critical elements in gene regulation and expression. Noncoding RNA also contributes to epigenetics, transcription, RNA splicing, and the translational machinery. The role of RNA in genetic regulation and disease offers a new potential level of unexplored genomic complexity. Pseudogenes Pseudogenes are inactive copies of protein-coding genes, often generated by gene duplication, that have become nonfunctional through the accumulation of inactivating mutations. The number of pseudogenes in the human genome is on the order of 13,000, and in some chromosomes is nearly the same as the number of functional protein-coding genes. Gene duplication is a major mechanism through which new genetic material is generated during molecular evolution. For example, the olfactory receptor gene family is one of the best-documented examples of pseudogenes in the human genome. More than 60 percent of the genes in this family are non-functional pseudogenes in humans. By comparison, only 20 percent of genes in the mouse olfactory receptor gene family are pseudogenes. Research suggests that this is a species-specific characteristic, as the most closely related primates all have proportionally fewer pseudogenes. This genetic discovery helps to explain the less acute sense of smell in humans relative to other mammals. Regulatory DNA sequences The human genome has many different regulatory sequences which are crucial to controlling gene expression. Conservative estimates indicate that these sequences make up 8% of the genome; however, extrapolations from the ENCODE project suggest that 20% or more of the genome is gene-regulatory sequence. Some types of non-coding DNA are genetic "switches" that do not encode proteins, but do regulate when and where genes are expressed (called enhancers). Regulatory sequences have been known since the late 1960s. 
The first identification of regulatory sequences in the human genome relied on recombinant DNA technology. Later with the advent of genomic sequencing, the identification of these sequences could be inferred by evolutionary conservation. The evolutionary branch between the primates and mouse, for example, occurred 70–90 million years ago. So computer comparisons of gene sequences that identify conserved non-coding sequences will be an indication of their importance in duties such as gene regulation. Other genomes have been sequenced with the same intention of aiding conservation-guided methods, for exampled the pufferfish genome. However, regulatory sequences disappear and re-evolve during evolution at a high rate. As of 2012, the efforts have shifted toward finding interactions between DNA and regulatory proteins by the technique ChIP-Seq, or gaps where the DNA is not packaged by histones (DNase hypersensitive sites), both of which tell where there are active regulatory sequences in the investigated cell type. Repetitive DNA sequences Repetitive DNA sequences comprise approximately 50% of the human genome. About 8% of the human genome consists of tandem DNA arrays or tandem repeats, low complexity repeat sequences that have multiple adjacent copies (e.g. "CAGCAGCAG..."). The tandem sequences may be of variable lengths, from two nucleotides to tens of nucleotides. These sequences are highly variable, even among closely related individuals, and so are used for genealogical DNA testing and forensic DNA analysis. Repeated sequences of fewer than ten nucleotides (e.g. the dinucleotide repeat (AC)n) are termed microsatellite sequences. Among the microsatellite sequences, trinucleotide repeats are of particular importance, as sometimes occur within coding regions of genes for proteins and may lead to genetic disorders. For example, Huntington's disease results from an expansion of the trinucleotide repeat (CAG)n within the Huntingtin gene on human chromosome 4. Telomeres (the ends of linear chromosomes) end with a microsatellite hexanucleotide repeat of the sequence (TTAGGG)n. Tandem repeats of longer sequences (arrays of repeated sequences 10–60 nucleotides long) are termed minisatellites. Transposable genetic elements, DNA sequences that can replicate and insert copies of themselves at other locations within a host genome, are an abundant component in the human genome. The most abundant transposon lineage, Alu, has about 50,000 active copies, and can be inserted into intragenic and intergenic regions. One other lineage, LINE-1, has about 100 active copies per genome (the number varies between people). Together with non-functional relics of old transposons, they account for over half of total human DNA. Sometimes called "jumping genes", transposons have played a major role in sculpting the human genome. Some of these sequences represent endogenous retroviruses, DNA copies of viral sequences that have become permanently integrated into the genome and are now passed on to succeeding generations. There are also a significant number of retroviruses in human DNA, at least 3 of which have been proven to possess an important function (i.e., HIV-like functional HERV-K; envelope genes of non-functional viruses HERV-W and HERV-FRD play a role in placenta formation by inducing cell-cell fusion). 
Mobile elements within the human genome can be classified into LTR retrotransposons (8.3% of total genome), SINEs (13.1% of total genome) including Alu elements, LINEs (20.4% of total genome), SVAs (SINE-VNTR-Alu) and Class II DNA transposons (2.9% of total genome). Junk DNA There is no consensus on what constitutes a "functional" element in the genome since geneticists, evolutionary biologists, and molecular biologists employ different definitions and methods. Due to the ambiguity in the terminology, different schools of thought have emerged. In evolutionary definitions, "functional" DNA, whether it is coding or non-coding, contributes to the fitness of the organism, and therefore is maintained by negative evolutionary pressure whereas "non-functional" DNA has no benefit to the organism and therefore is under neutral selective pressure. This type of DNA has been described as junk DNA. In genetic definitions, "functional" DNA is related to how DNA segments manifest by phenotype and "nonfunctional" is related to loss-of-function effects on the organism. In biochemical definitions, "functional" DNA relates to DNA sequences that specify molecular products (e.g. noncoding RNAs) and biochemical activities with mechanistic roles in gene or genome regulation (i.e. DNA sequences that impact cellular level activity such as cell type, condition, and molecular processes). There is no consensus in the literature on the amount of functional DNA since, depending on how "function" is understood, ranges have been estimated from up to 90% of the human genome is likely nonfunctional DNA (junk DNA) to up to 80% of the genome is likely functional. It is also possible that junk DNA may acquire a function in the future and therefore may play a role in evolution, but this is likely to occur only very rarely. Finally DNA that is deliterious to the organism and is under negative selective pressure is called garbage DNA. Sequencing The first human genome sequences were published in nearly complete draft form in February 2001 by the Human Genome Project and Celera Corporation. Completion of the Human Genome Project's sequencing effort was announced in 2004 with the publication of a draft genome sequence, leaving just 341 gaps in the sequence, representing highly repetitive and other DNA that could not be sequenced with the technology available at the time. The human genome was the first of all vertebrates to be sequenced to such near-completion, and as of 2018, the diploid genomes of over a million individual humans had been determined using next-generation sequencing. These data are used worldwide in biomedical science, anthropology, forensics and other branches of science. Such genomic studies have led to advances in the diagnosis and treatment of diseases, and to new insights in many fields of biology, including human evolution. By 2018, the total number of genes had been raised to at least 46,831, plus another 2300 micro-RNA genes. A 2018 population survey found another 300 million bases of human genome that was not in the reference sequence. Prior to the acquisition of the full genome sequence, estimates of the number of human genes ranged from 50,000 to 140,000 (with occasional vagueness about whether these estimates included non-protein coding genes). As genome sequence quality and the methods for identifying protein-coding genes improved, the count of recognized protein-coding genes dropped to 19,000–20,000. 
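As a rough consistency check on the figures quoted above and in the Protein-coding genes section (order-of-magnitude estimates only, not measurements), about 20,000 genes averaging roughly 62 kb in length span

\[ 20{,}000 \times 62\,\mathrm{kb} \approx 1.24\,\mathrm{Gb} \approx 0.4 \times 3.1\,\mathrm{Gb}, \]

which agrees with the statement that protein-coding genes occupy roughly 40% of the genome even though the exonic coding sequence itself amounts to only about 1-2%.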
In 2022, the Telomere-to-Telomere (T2T) consortium reported the complete sequence of a human female genome, filling all the gaps in the X chromosome (2020) and the 22 autosomes (May 2021). The previously unsequenced parts contain immune response genes that help to adapt to and survive infections, as well as genes that are important for predicting drug response. The completed human genome sequence will also provide better understanding of human formation as an individual organism and how humans vary both between each other and other species. Although the 'completion' of the human genome project was announced in 2001, there remained hundreds of gaps, with about 5–10% of the total sequence remaining undetermined. The missing genetic information was mostly in repetitive heterochromatic regions and near the centromeres and telomeres, but also some gene-encoding euchromatic regions. There remained 160 euchromatic gaps in 2015 when the sequences spanning another 50 formerly unsequenced regions were determined. Only in 2020 was the first truly complete telomere-to-telomere sequence of a human chromosome determined, namely of the X chromosome. The first complete telomere-to-telomere sequence of a human autosomal chromosome, chromosome 8, followed a year later. The complete human genome (without Y chromosome) was published in 2021, while with Y chromosome in January 2022. In 2023, a draft human pangenome reference was published. It is based on 47 genomes from persons of varied ethnicity. Plans are underway for an improved reference capturing still more biodiversity from a still wider sample. Genomic variation in humans Human reference genome With the exception of identical twins, all humans show significant variation in genomic DNA sequences. The human reference genome (HRG) is used as a standard sequence reference. There are several important points concerning the human reference genome: The HRG is a haploid sequence. Each chromosome is represented once. The HRG is a composite sequence, and does not correspond to any actual human individual. The HRG is periodically updated to correct errors, ambiguities, and unknown "gaps". The HRG in no way represents an "ideal" or "perfect" human individual. It is simply a standardized representation or model that is used for comparative purposes. The Genome Reference Consortium is responsible for updating the HRG. Version 38 was released in December 2013. Measuring human genetic variation Most studies of human genetic variation have focused on single-nucleotide polymorphisms (SNPs), which are substitutions in individual bases along a chromosome. Most analyses estimate that SNPs occur 1 in 1000 base pairs, on average, in the euchromatic human genome, although they do not occur at a uniform density. Thus follows the popular statement that "we are all, regardless of race, genetically 99.9% the same", although this would be somewhat qualified by most geneticists. For example, a much larger fraction of the genome is now thought to be involved in copy number variation. A large-scale collaborative effort to catalog SNP variations in the human genome is being undertaken by the International HapMap Project. The genomic loci and length of certain types of small repetitive sequences are highly variable from person to person, which is the basis of DNA fingerprinting and DNA paternity testing technologies. 
The heterochromatic portions of the human genome, which total several hundred million base pairs, are also thought to be quite variable within the human population (they are so repetitive and so long that they cannot be accurately sequenced with current technology). These regions contain few genes, and it is unclear whether any significant phenotypic effect results from typical variation in repeats or heterochromatin. Most gross genomic mutations in gamete germ cells probably result in inviable embryos; however, a number of human diseases are related to large-scale genomic abnormalities. Down syndrome, Turner Syndrome, and a number of other diseases result from nondisjunction of entire chromosomes. Cancer cells frequently have aneuploidy of chromosomes and chromosome arms, although a cause and effect relationship between aneuploidy and cancer has not been established. Mapping human genomic variation Whereas a genome sequence lists the order of every DNA base in a genome, a genome map identifies the landmarks. A genome map is less detailed than a genome sequence and aids in navigating around the genome. An example of a variation map is the HapMap being developed by the International HapMap Project. The HapMap is a haplotype map of the human genome, "which will describe the common patterns of human DNA sequence variation." It catalogs the patterns of small-scale variations in the genome that involve single DNA letters, or bases. Researchers published the first sequence-based map of large-scale structural variation across the human genome in the journal Nature in May 2008. Large-scale structural variations are differences in the genome among people that range from a few thousand to a few million DNA bases; some are gains or losses of stretches of genome sequence and others appear as re-arrangements of stretches of sequence. These variations include differences in the number of copies individuals have of a particular gene, deletions, translocations and inversions. Structural variation Structural variation refers to genetic variants that affect larger segments of the human genome, as opposed to point mutations. Often, structural variants (SVs) are defined as variants of 50 base pairs (bp) or greater, such as deletions, duplications, insertions, inversions and other rearrangements. About 90% of structural variants are noncoding deletions but most individuals have more than a thousand such deletions; the size of deletions ranges from dozens of base pairs to tens of thousands of bp. On average, individuals carry ~3 rare structural variants that alter coding regions, e.g. delete exons. About 2% of individuals carry ultra-rare megabase-scale structural variants, especially rearrangements. That is, millions of base pairs may be inverted within a chromosome; ultra-rare means that they are only found in individuals or their family members and thus have arisen very recently. SNP frequency across the human genome Single-nucleotide polymorphisms (SNPs) do not occur homogeneously across the human genome. In fact, there is enormous diversity in SNP frequency between genes, reflecting different selective pressures on each gene as well as different mutation and recombination rates across the genome. However, studies on SNPs are biased towards coding regions, the data generated from them are unlikely to reflect the overall distribution of SNPs throughout the genome. 
Therefore, the SNP Consortium protocol was designed to identify SNPs with no bias towards coding regions and the Consortium's 100,000 SNPs generally reflect sequence diversity across the human chromosomes. The SNP Consortium aimed to expand the number of SNPs identified across the genome to 300,000 by the end of the first quarter of 2001. Changes in non-coding sequence and synonymous changes in coding sequence are generally more common than non-synonymous changes, reflecting greater selective pressure reducing diversity at positions dictating amino acid identity. Transitional changes are more common than transversions, with CpG dinucleotides showing the highest mutation rate, presumably due to deamination. Personal genomes A personal genome sequence is a (nearly) complete sequence of the chemical base pairs that make up the DNA of a single person. Because medical treatments have different effects on different people due to genetic variations such as single-nucleotide polymorphisms (SNPs), the analysis of personal genomes may lead to personalized medical treatment based on individual genotypes. The first personal genome sequence to be determined was that of Craig Venter in 2007. Personal genomes had not been sequenced in the public Human Genome Project to protect the identity of volunteers who provided DNA samples; that reference sequence was derived from the DNA of several volunteers from a diverse population. However, early in the Venter-led Celera Genomics genome sequencing effort the decision was made to switch from sequencing a composite sample to using DNA from a single individual, later revealed to have been Venter himself. Thus the Celera human genome sequence released in 2000 was largely that of one man. Subsequent replacement of the early composite-derived data and determination of the diploid sequence, representing both sets of chromosomes, rather than the haploid sequence originally reported, allowed the release of the first personal genome. In April 2008, that of James Watson was also completed. In 2009, Stephen Quake published his own genome sequence derived from a sequencer of his own design, the Heliscope. A Stanford team led by Euan Ashley published a framework for the medical interpretation of human genomes implemented on Quake's genome and made whole-genome-informed medical decisions for the first time. That team further extended the approach to the West family, the first family sequenced as part of Illumina's Personal Genome Sequencing program. Since then, hundreds of personal genome sequences have been released, including those of Desmond Tutu, and of a Paleo-Eskimo. In 2012, the whole-genome sequences of two family trios among 1,092 genomes were made public. In November 2013, a Spanish family made four personal exome datasets (about 1% of the genome) publicly available under a Creative Commons public domain license. The Personal Genome Project (started in 2005) is among the few to make both genome sequences and corresponding medical phenotypes publicly available. The sequencing of individual genomes further unveiled levels of genetic complexity that had not been appreciated before. Personal genomics helped reveal the significant level of diversity in the human genome attributed not only to SNPs but structural variations as well. However, the application of such knowledge to the treatment of disease and to medical practice is still in its early stages.
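On the substitution spectrum described earlier in this section (transitions versus transversions), the distinction depends only on whether the two bases belong to the same chemical class, which the following minimal Python sketch makes explicit; the example substitutions are illustrative and not drawn from any dataset cited here.

```python
# Minimal sketch: classify a single-nucleotide substitution as a transition
# (purine<->purine or pyrimidine<->pyrimidine) or a transversion.
PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def substitution_type(ref: str, alt: str) -> str:
    ref, alt = ref.upper(), alt.upper()
    if ref == alt or not {ref, alt} <= (PURINES | PYRIMIDINES):
        raise ValueError("expected two different bases from A, C, G, T")
    same_class = {ref, alt} <= PURINES or {ref, alt} <= PYRIMIDINES
    return "transition" if same_class else "transversion"

print(substitution_type("C", "T"))  # transition, the change typical of deaminated CpG sites
print(substitution_type("C", "A"))  # transversion
```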
Exome sequencing has become increasingly popular as a tool to aid in diagnosis of genetic disease because the exome contributes only 1% of the genomic sequence but accounts for roughly 85% of mutations that contribute significantly to disease. Human knockouts In humans, gene knockouts occur naturally as heterozygous or homozygous loss-of-function variants. These knockouts are often difficult to distinguish, especially within heterogeneous genetic backgrounds. They are also difficult to find, as they occur at low frequencies. Populations with high rates of consanguinity, such as countries with high rates of first-cousin marriages, display the highest frequencies of homozygous gene knockouts. Such populations include those of Pakistan and Iceland, and Amish communities. These populations, with a high level of parental relatedness, have been the subjects of human knockout research, which has helped to determine the function of specific genes in humans. By distinguishing specific knockouts, researchers are able to use phenotypic analyses of these individuals to help characterize the gene that has been knocked out. Knockouts in specific genes can cause genetic diseases, potentially have beneficial effects, or even result in no phenotypic effect at all. However, determining a knockout's phenotypic effect in humans can be challenging. Challenges to characterizing and clinically interpreting knockouts include difficulty in calling DNA variants, determining disruption of protein function (annotation), and assessing the influence of mosaicism on the phenotype. One major study that investigated human knockouts is the Pakistan Risk of Myocardial Infarction study. It was found that individuals possessing a heterozygous loss-of-function gene knockout for the APOC3 gene had lower blood triglycerides after consuming a high-fat meal than individuals without the mutation. However, individuals possessing homozygous loss-of-function gene knockouts of the APOC3 gene displayed the lowest level of triglycerides in the blood after the fat load test, as they produce no functional APOC3 protein. Human genetic disorders Most aspects of human biology involve both genetic (inherited) and non-genetic (environmental) factors. Some inherited variation influences aspects of our biology that are not medical in nature (height, eye color, ability to taste or smell certain compounds, etc.). Moreover, some genetic disorders only cause disease in combination with the appropriate environmental factors (such as diet). With these caveats, genetic disorders may be described as clinically defined diseases caused by genomic DNA sequence variation. In the most straightforward cases, the disorder can be associated with variation in a single gene. For example, cystic fibrosis is caused by mutations in the CFTR gene and is the most common recessive disorder in Caucasian populations, with over 1,300 different mutations known. Disease-causing mutations in specific genes are usually severe in terms of gene function and are rare, thus genetic disorders are similarly individually rare. However, since there are many genes that can vary to cause genetic disorders, in aggregate they constitute a significant component of known medical conditions, especially in pediatric medicine. Molecularly characterized genetic disorders are those for which the underlying causal gene has been identified. Currently there are approximately 2,200 such disorders annotated in the OMIM database.
Studies of genetic disorders are often performed by means of family-based studies. In some instances, population-based approaches are employed, particularly in the case of so-called founder populations such as those of Finland, French Canada, Utah, and Sardinia. Diagnosis and treatment of genetic disorders are usually performed by a geneticist-physician trained in clinical/medical genetics. The results of the Human Genome Project are likely to provide increased availability of genetic testing for gene-related disorders, and eventually improved treatment. Parents can be screened for hereditary conditions and counselled on the consequences, the probability of inheritance, and how to avoid or ameliorate the condition in their offspring. There are many different kinds of DNA sequence variation, ranging from complete extra or missing chromosomes down to single nucleotide changes. It is generally presumed that much naturally occurring genetic variation in human populations is phenotypically neutral, i.e., has little or no detectable effect on the physiology of the individual (although there may be fractional differences in fitness defined over evolutionary time frames). Genetic disorders can be caused by any or all known types of sequence variation. To molecularly characterize a new genetic disorder, it is necessary to establish a causal link between a particular genomic sequence variant and the clinical disease under investigation. Such studies constitute the realm of human molecular genetics. With the advent of the Human Genome Project and the International HapMap Project, it has become feasible to explore subtle genetic influences on many common disease conditions such as diabetes, asthma, migraine, schizophrenia, etc. Although some causal links have been made between genomic sequence variants in particular genes and some of these diseases, often with much publicity in the general media, these are usually not considered to be genetic disorders per se as their causes are complex, involving many different genetic and environmental factors. Thus there may be disagreement in particular cases whether a specific medical condition should be termed a genetic disorder. Other notable genetic disorders include Kallmann syndrome and Pfeiffer syndrome (gene FGFR1), Fuchs corneal dystrophy (gene TCF4), Hirschsprung's disease (genes RET and FECH), Bardet-Biedl syndrome 1 (genes CCDC28B and BBS1), Bardet-Biedl syndrome 10 (gene BBS10), and facioscapulohumeral muscular dystrophy type 2 (genes D4Z4 and SMCHD1). Genome sequencing can now narrow the search down to specific genomic locations, allowing mutations that result in a genetic disorder to be found more accurately. Copy number variants (CNVs) and single-nucleotide variants (SNVs) can also be detected at the same time as genome sequencing with newer sequencing procedures, called next-generation sequencing (NGS). These analyses may cover only a small portion of the genome, around 1–2%. The results of this sequencing can be used for clinical diagnosis of a genetic condition, including Usher syndrome, retinal disease, hearing impairments, diabetes, epilepsy, Leigh disease, hereditary cancers, neuromuscular diseases, primary immunodeficiencies, severe combined immunodeficiency (SCID), and diseases of the mitochondria. NGS can also be used to identify carriers of diseases before conception.
The diseases that can be detected in this sequencing include Tay-Sachs disease, Bloom syndrome, Gaucher disease, Canavan disease, familial dysautonomia, cystic fibrosis, spinal muscular atrophy, and fragile-X syndrome. Next-generation sequencing can also be narrowed down to look specifically for diseases more prevalent in certain ethnic populations. Evolution Comparative genomics studies of mammalian genomes suggest that approximately 5% of the human genome has been conserved by evolution since the divergence of extant lineages approximately 200 million years ago, containing the vast majority of genes. The published chimpanzee genome differs from that of the human genome by 1.23% in direct sequence comparisons. Around 20% of this figure is accounted for by variation within each species, leaving only ~1.06% consistent sequence divergence between humans and chimps at shared genes. This nucleotide-by-nucleotide difference is dwarfed, however, by the portion of each genome that is not shared, including around 6% of functional genes that are unique to either humans or chimps. In other words, the considerable observable differences between humans and chimps may be due as much or more to genome-level variation in the number, function and expression of genes rather than DNA sequence changes in shared genes. Indeed, even within humans, there has been found to be a previously unappreciated amount of copy number variation (CNV) which can make up as much as 5–15% of the human genome. In other words, two humans may differ by roughly 500,000,000 base pairs of DNA, some being active genes, others inactivated, or active at different levels. The full significance of this finding remains to be seen. On average, a typical human protein-coding gene differs from its chimpanzee ortholog by only two amino acid substitutions; nearly one third of human genes have exactly the same protein translation as their chimpanzee orthologs. A major difference between the two genomes is human chromosome 2, which is equivalent to a fusion product of chimpanzee chromosomes 12 and 13 (later renamed chromosomes 2A and 2B, respectively). Humans have undergone an extraordinary loss of olfactory receptor genes during our recent evolution, which explains our relatively crude sense of smell compared to most other mammals. Evolutionary evidence suggests that the emergence of color vision in humans and several other primate species has diminished the need for the sense of smell. In September 2016, scientists reported that, based on human DNA genetic studies, all non-Africans in the world today can be traced to a single population that exited Africa between 50,000 and 80,000 years ago. Mitochondrial DNA The human mitochondrial DNA is of tremendous interest to geneticists, since it undoubtedly plays a role in mitochondrial disease. It also sheds light on human evolution; for example, analysis of variation in the human mitochondrial genome has led to the postulation of a recent common ancestor for all humans on the maternal line of descent (see Mitochondrial Eve). Due to the damage induced by exposure to reactive oxygen species, mitochondrial DNA (mtDNA) has a more rapid rate of variation than nuclear DNA. This 20-fold higher mutation rate allows mtDNA to be used for more accurate tracing of maternal ancestry. Studies of mtDNA in populations have allowed ancient migration paths to be traced, such as the migration of Native Americans from Siberia or Polynesians from southeastern Asia.
It has also been used to show that there is no trace of Neanderthal DNA in the European gene mixture inherited through purely maternal lineage. Due to the restrictive, all-or-none manner of mtDNA inheritance, this result (no trace of Neanderthal mtDNA) would be likely unless there were a large percentage of Neanderthal ancestry, or there was strong positive selection for that mtDNA. For example, going back 5 generations, only 1 of a person's 32 ancestors contributed to that person's mtDNA, so if one of these 32 was pure Neanderthal an expected ~3% of that person's autosomal DNA would be of Neanderthal origin, yet they would have a ~97% chance of having no trace of Neanderthal mtDNA. Epigenome Epigenetics describes a variety of features of the human genome that transcend its primary DNA sequence, such as chromatin packaging, histone modifications and DNA methylation, and which are important in regulating gene expression, genome replication and other cellular processes. Epigenetic markers strengthen or weaken transcription of certain genes but do not affect the actual sequence of DNA nucleotides. DNA methylation is a major form of epigenetic control over gene expression and one of the most highly studied topics in epigenetics. During development, the human DNA methylation profile experiences dramatic changes. In early germ line cells, the genome has very low methylation levels. These low levels are generally associated with active genes. As development progresses, parental imprinting tags lead to increased methylation activity. Epigenetic patterns can be identified between tissues within an individual as well as between individuals themselves. Identical genes that have differences only in their epigenetic state are called epialleles. Epialleles can be placed into three categories: those directly determined by an individual's genotype, those influenced by genotype, and those entirely independent of genotype. The epigenome is also influenced significantly by environmental factors. Diet, toxins, and hormones impact the epigenetic state. Studies in dietary manipulation have demonstrated that methyl-deficient diets are associated with hypomethylation of the epigenome. Such studies establish epigenetics as an important interface between the environment and the genome.
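Returning to the five-generation example given in the mitochondrial DNA passage above, the arithmetic behind the ~3% and ~97% figures can be checked directly; the sketch below is a simplified model that ignores recombination variance and pedigree collapse, included only as a worked illustration.

```python
# Worked check of the five-generation mtDNA example above (simplified model).
generations = 5
ancestors = 2 ** generations                  # 32 ancestors five generations back
expected_autosomal_share = 1 / ancestors      # expected autosomal contribution of any one ancestor
p_not_on_maternal_line = (ancestors - 1) / ancestors  # only 1 of the 32 transmits mtDNA

print(f"Ancestors {generations} generations back: {ancestors}")
print(f"Expected autosomal share from one ancestor: {expected_autosomal_share:.1%}")
print(f"Chance that ancestor left no mtDNA trace: {p_not_on_maternal_line:.1%}")
# ~3.1% expected autosomal ancestry versus a ~96.9% chance of no mtDNA signal,
# matching the ~3% and ~97% figures quoted above.
```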
Anthrax
Anthrax is an infection caused by the bacterium Bacillus anthracis or Bacillus cereus biovar anthracis. Infection typically occurs by contact with the skin, inhalation, or intestinal absorption. Symptom onset occurs between one day and more than two months after the infection is contracted. The skin form presents with a small blister with surrounding swelling that often turns into a painless ulcer with a black center. The inhalation form presents with fever, chest pain, and shortness of breath. The intestinal form presents with diarrhea (which may contain blood), abdominal pains, nausea, and vomiting. According to the U.S. Centers for Disease Control and Prevention, the first clinical descriptions of cutaneous anthrax were given by Maret in 1752 and Fournier in 1769. Before that, anthrax had been described only in historical accounts. The German scientist Robert Koch was the first to identify Bacillus anthracis as the bacterium that causes anthrax. Anthrax is spread by contact with the bacterium's spores, which often appear in infectious animal products. Contact is by breathing or eating or through an area of broken skin. It does not typically spread directly between people. Risk factors include working with animals or animal products and being military personnel. Diagnosis can be confirmed by finding antibodies or the toxin in the blood or by culture of a sample from the infected site. Anthrax vaccination is recommended for people at high risk of infection. Immunizing animals against anthrax is recommended in areas where previous infections have occurred. A two-month course of antibiotics such as ciprofloxacin, levofloxacin and doxycycline after exposure can also prevent infection. If infection occurs, treatment is with antibiotics and possibly antitoxin. The type and number of antibiotics used depend on the type of infection. Antitoxin is recommended for those with widespread infection. A rare disease, human anthrax is most common in Africa and central and southern Asia. It also occurs more regularly in Southern Europe than elsewhere on the continent and is uncommon in Northern Europe and North America. Globally, at least 2,000 cases occur a year, with about two cases a year in the United States. Skin infections represent more than 95% of cases. Without treatment the risk of death from skin anthrax is 23.7%. For intestinal infection the risk of death is 25 to 75%, while respiratory anthrax has a mortality of 50 to 80%, even with treatment. Until the 20th century, anthrax infections killed hundreds of thousands of people and animals each year. In herbivorous animals infection occurs when they eat or breathe in the spores while grazing. Animals may also become infected by killing or eating infected animals. Several countries have developed anthrax as a weapon. It has been used in biowarfare and bioterrorism since 1914. In 1975, the Biological Weapons Convention prohibited the "development, production and stockpiling" of biological weapons. It has since been used in bioterrorism. Likely delivery methods of weaponized anthrax include aerial dispersal or dispersal through livestock; notable bioterrorism uses include the 2001 anthrax attacks and an incident in 1993 by the Aum Shinrikyo group. Etymology The English name comes from anthrax, the Greek word for coal, possibly having Egyptian etymology, because of the characteristic black skin lesions people with a cutaneous anthrax infection develop.
The central black eschar surrounded by vivid red skin has long been recognised as typical of the disease. The first recorded use of the word "anthrax" in English is in a 1398 translation of Bartholomaeus Anglicus's work (On the Properties of Things, 1240). Anthrax was historically known by a wide variety of names, indicating its symptoms, location, and groups considered most vulnerable to infection. They include Siberian plague, Cumberland disease, charbon, splenic fever, malignant edema, and woolsorter's disease. Signs and symptoms Skin Cutaneous anthrax, also known as hide-porter's disease, is the form in which anthrax occurs on the skin. It is the most common (>90% of cases) and least dangerous form (low mortality with treatment, 23.7% mortality without). Cutaneous anthrax presents as a boil-like skin lesion that eventually forms an ulcer with a black center (eschar). The black eschar often appears as a large, painless, necrotic ulcer (beginning as an irritating and itchy skin lesion or blister that is dark and usually concentrated as a black dot, somewhat resembling bread mold) at the site of infection. In general, cutaneous infections form at the site of spore penetration two to five days after exposure. Unlike bruises or most other lesions, cutaneous anthrax infections normally do not cause pain. Nearby lymph nodes may become infected, reddened, swollen, and painful. A scab soon forms over the lesion and falls off within a few weeks. Complete recovery may take longer. Cutaneous anthrax is typically caused when B. anthracis spores enter through cuts on the skin. This form is found most commonly when humans handle infected animals or animal products. Injection In December 2009, an outbreak of anthrax occurred among injecting heroin users in the Glasgow and Stirling areas of Scotland, resulting in 14 deaths. It was the first documented non-occupational human anthrax outbreak in the UK since 1960. The source of the anthrax is believed to have been dilution of the heroin with bone meal in Afghanistan. Injected anthrax may cause symptoms similar to those of cutaneous anthrax, with the exception of the black areas, and may also cause infection deep in the muscle and spread faster. This can make it harder to recognise and treat. Lungs Inhalation anthrax usually develops within a week after exposure, but may take up to two months. During the first few days of illness, most people have fever, chills, and fatigue. These symptoms may be accompanied by cough, shortness of breath, chest pain, and nausea or vomiting, making inhalation anthrax difficult to distinguish from influenza and community-acquired pneumonia. This is often described as the prodromal period. Over the next day or so, shortness of breath, cough, and chest pain become more common, and complaints not involving the chest such as nausea, vomiting, altered mental status, sweats, and headache develop in one-third or more of people. Upper respiratory tract symptoms occur in only a quarter of people, and muscle pains are rare. Altered mental status or shortness of breath generally brings people to healthcare and marks the fulminant phase of illness. The infection first involves the lymph nodes in the chest rather than the lungs themselves, a condition called hemorrhagic mediastinitis, in which bloody fluid accumulates in the chest cavity, causing shortness of breath. The second (pneumonia) stage occurs when the infection spreads from the lymph nodes to the lungs. Symptoms of the second stage develop suddenly within hours or days after the first stage.
Symptoms include high fever, extreme shortness of breath, shock, and rapid death within 48 hours in fatal cases. Gastrointestinal Gastrointestinal (GI) infection is most often caused by consuming anthrax-infected meat and is characterized by diarrhea, potentially with blood, abdominal pains, acute inflammation of the intestinal tract, and loss of appetite. Occasional vomiting of blood can occur. Lesions have been found in the intestines and in the mouth and throat. After the bacterium invades the gastrointestinal system, it spreads to the bloodstream and throughout the body, while continuing to make toxins. Cause Bacteria Bacillus anthracis is a rod-shaped, Gram-positive, facultatively anaerobic bacterium about 1 by 9 μm in size. It was shown to cause disease by Robert Koch in 1876 when he took a blood sample from an infected cow, isolated the bacteria, and put them into a mouse. The bacterium normally rests in spore form in the soil, and can survive for decades in this state. Herbivores are often infected while grazing, especially when eating rough, irritant, or spiky vegetation; the vegetation has been hypothesized to cause wounds within the gastrointestinal tract, permitting entry of the bacterial spores into the tissues. Once ingested or placed in an open wound, the bacteria begin multiplying inside the animal or human and typically kill the host within a few days or weeks. The spores germinate at the site of entry into the tissues and then spread by the circulation to the lymphatics, where the bacteria multiply. The production of two powerful exotoxins, edema toxin and lethal toxin, by the bacteria causes death. Veterinarians can often tell a possible anthrax-induced death by its sudden occurrence and the dark, nonclotting blood that oozes from the body orifices. Most anthrax bacteria inside the body after death are outcompeted and destroyed by anaerobic bacteria within minutes to hours post mortem, but anthrax vegetative bacteria that escape the body via oozing blood or opening the carcass may form hardy spores. These vegetative bacteria are not contagious. One spore forms per vegetative bacterium. The triggers for spore formation are not known, but oxygen tension and lack of nutrients may play roles. Once formed, these spores are very hard to eradicate. The infection of herbivores (and occasionally humans) by inhalation normally begins with inhaled spores being transported through the air passages into the tiny air sacs (alveoli) in the lungs. The spores are then picked up by scavenger cells (macrophages) in the lungs and transported through small vessels (lymphatics) to the lymph nodes in the central chest cavity (mediastinum). Damage caused by the anthrax spores and bacilli to the central chest cavity can cause chest pain and difficulty breathing. Once in the lymph nodes, the spores germinate into active bacilli that multiply and eventually burst the macrophages, releasing many more bacilli into the bloodstream to be transferred to the entire body. Once in the bloodstream, these bacilli release three proteins: lethal factor, edema factor, and protective antigen. The three are not toxic by themselves, but their combination is highly lethal to humans. Protective antigen combines with these other two factors to form lethal toxin and edema toxin, respectively. These toxins are the primary agents of tissue destruction, bleeding, and death of the host.
If antibiotics are administered too late, even if the antibiotics eradicate the bacteria, some hosts still die of toxemia because the toxins produced by the bacilli remain in their systems at lethal dose levels. Exposure and transmission Anthrax can enter the human body through the intestines (gastrointestinal), lungs (pulmonary), or skin (cutaneous), and causes distinct clinical symptoms based on its site of entry. Anthrax does not usually spread from an infected human to an uninfected human. If the disease is fatal, the body and its mass of anthrax bacilli become a potential source of infection to others, and special precautions should be used to prevent further contamination. Pulmonary anthrax, if left untreated, is almost always fatal. Historically, pulmonary anthrax was called woolsorters' disease because it was an occupational hazard for people who sorted wool. Today, this form of infection is extremely rare in industrialized nations. Cutaneous anthrax is the most common route of transmission but also the least dangerous of the three. Gastrointestinal anthrax is likely fatal if left untreated, but very rare. The spores of anthrax are able to survive in harsh conditions for decades or even centuries. Such spores can be found on all continents, including Antarctica. Disturbed grave sites of infected animals have been known to cause infection after 70 years. In one such event, a young boy died from gastrointestinal anthrax after the thawing of reindeer carcasses that had been frozen for 75 years. Anthrax spores traveled through groundwater used for drinking and caused dozens of people, largely children, to be hospitalized. Occupational exposure to infected animals or their products (such as skin, wool, and meat) is the usual pathway of exposure for humans. Workers exposed to dead animals and animal products are at the highest risk, especially in countries where anthrax is more common. Anthrax in livestock grazing on open range where they mix with wild animals still occasionally occurs in the U.S. and elsewhere. Many workers who deal with wool and animal hides are routinely exposed to low levels of anthrax spores, but most exposure levels are not sufficient to produce infection. A lethal infection is reported to result from inhalation of about 10,000–20,000 spores, though this dose varies among host species. Mechanism The lethality of the anthrax disease is due to the bacterium's two principal virulence factors: the poly-D-glutamic acid capsule, which protects the bacterium from phagocytosis by host neutrophils; and the tripartite protein toxin, called anthrax toxin, consisting of protective antigen (PA), edema factor (EF), and lethal factor (LF). PA plus LF produces lethal toxin, and PA plus EF produces edema toxin. These toxins cause death and tissue swelling (edema), respectively. To enter the cells, the edema and lethal factors use another protein produced by B. anthracis called protective antigen, which binds to two surface receptors on the host cell. A cell protease then cleaves PA into two fragments: PA20 and PA63. PA20 dissociates into the extracellular medium, playing no further role in the toxic cycle. PA63 then oligomerizes with six other PA63 fragments, forming a heptameric ring-shaped structure named a prepore. Once in this shape, the complex can competitively bind up to three EFs or LFs, forming a resistant complex. Receptor-mediated endocytosis occurs next, providing the newly formed toxic complex access to the interior of the host cell.
The acidified environment within the endosome triggers the heptamer to release the LF and/or EF into the cytosol. It is unknown exactly how the complex results in the death of the cell. Edema factor is a calmodulin-dependent adenylate cyclase. Adenylate cyclase catalyzes the conversion of ATP into cyclic AMP (cAMP) and pyrophosphate. The complexation of adenylate cyclase with calmodulin removes calmodulin from stimulating calcium-triggered signaling, thus inhibiting the immune response. To be specific, EF inactivates neutrophils (a type of phagocytic cell) by the process just described, so they cannot phagocytose bacteria. Lethal factor was previously presumed to cause macrophages to make TNF-alpha and interleukin 1 beta (IL1B). TNF-alpha is a cytokine whose primary role is to regulate immune cells, as well as to induce inflammation and apoptosis or programmed cell death. Interleukin 1 beta is another cytokine that also regulates inflammation and apoptosis. The overproduction of TNF-alpha and IL1B ultimately leads to septic shock and death. However, recent evidence indicates anthrax also targets endothelial cells that line serous cavities such as the pericardial cavity, pleural cavity, and peritoneal cavity, lymph vessels, and blood vessels, causing vascular leakage of fluid and cells, and ultimately hypovolemic shock and septic shock. Diagnosis Various techniques may be used for the direct identification of B. anthracis in clinical material. Firstly, specimens may be Gram stained. Bacillus spp. are quite large in size (3 to 4 μm long), they may grow in long chains, and they stain Gram-positive. To confirm the organism is B. anthracis, rapid diagnostic techniques such as polymerase chain reaction-based assays and immunofluorescence microscopy may be used. All Bacillus species grow well on 5% sheep blood agar and other routine culture media. Polymyxin-lysozyme-EDTA-thallous acetate can be used to isolate B. anthracis from contaminated specimens, and bicarbonate agar is used as an identification method to induce capsule formation. Bacillus spp. usually grow within 24 hours of incubation at 35 °C, in ambient air (room temperature) or in 5% CO2. If bicarbonate agar is used for identification, then the medium must be incubated in 5% CO2. B. anthracis colonies are medium-large, gray, flat, and irregular with swirling projections, often referred to as having a "medusa head" appearance, and are not hemolytic on 5% sheep blood agar. The bacteria are not motile, susceptible to penicillin, and produce a wide zone of lecithinase on egg yolk agar. Confirmatory testing to identify B. anthracis includes gamma bacteriophage testing, indirect hemagglutination, and enzyme-linked immunosorbent assay to detect antibodies. The best confirmatory precipitation test for anthrax is the Ascoli test. Prevention Precautions are taken to avoid contact with the skin and any fluids exuded through natural body openings of a deceased body that is suspected of harboring anthrax. The body should be put in strict quarantine. A blood sample is collected and sealed in a container and analyzed in an approved laboratory to ascertain if anthrax is the cause of death. The body should be sealed in an airtight body bag and incinerated to prevent the transmission of anthrax spores.
Microscopic visualization of the encapsulated bacilli, usually in very large numbers, in a blood smear stained with polychrome methylene blue (McFadyean stain) is fully diagnostic, though the culture of the organism is still the gold standard for diagnosis. Full isolation of the body is important to prevent possible contamination of others. Protective, impermeable clothing and equipment such as rubber gloves, rubber apron, and rubber boots with no perforations are used when handling the body. No skin, especially if it has any wounds or scratches, should be exposed. Disposable personal protective equipment is preferable, but if not available, decontamination can be achieved by autoclaving. Used disposable equipment is burned and/or buried after use. All contaminated bedding or clothing is isolated in double plastic bags and treated as biohazard waste. Respiratory equipment capable of filtering small particles, such as the US National Institute for Occupational Safety and Health- and Mine Safety and Health Administration-approved high-efficiency respirator, is worn. Addressing anthrax from a One Health perspective can reduce the risks of transmission and better protect both human and animal populations. Prevention of anthrax from environmental sources such as air, water, and soil includes disinfection with effective microorganisms applied by spraying, and the use of bokashi mud balls mixed with effective microorganisms for contaminated waterways. Vaccines Vaccines against anthrax for use in livestock and humans have had a prominent place in the history of medicine. The French scientist Louis Pasteur developed the first effective vaccine in 1881. Human anthrax vaccines were developed by the Soviet Union in the late 1930s and in the US and UK in the 1950s. The current FDA-approved US vaccine was formulated in the 1960s. Currently administered human anthrax vaccines include acellular subunit vaccine (United States) and live vaccine (Russia) varieties. All currently used anthrax vaccines show considerable local and general reactogenicity (erythema, induration, soreness, fever) and serious adverse reactions occur in about 1% of recipients. The American product, BioThrax, is licensed by the FDA and was formerly administered in a six-dose primary series at 0, 2, 4 weeks and 6, 12, 18 months, with annual boosters to maintain immunity. In 2008, the FDA approved omitting the week-2 dose, resulting in the currently recommended five-dose series. This five-dose series is available to military personnel, scientists who work with anthrax, and members of the public whose jobs put them at risk. New second-generation vaccines currently being researched include recombinant live vaccines and recombinant subunit vaccines. In the 20th century, the use of a modern product (BioThrax) to protect American troops against the use of anthrax in biological warfare was controversial. Antibiotics Preventive antibiotics are recommended in those who have been exposed. Early detection of sources of anthrax infection can allow preventive measures to be taken. In response to the anthrax attacks of October 2001, the United States Postal Service (USPS) installed biodetection systems (BDSs) in their large-scale mail processing facilities. BDS response plans were formulated by the USPS in conjunction with local responders including fire, police, hospitals, and public health. Employees of these facilities have been educated about anthrax, response actions, and prophylactic medication.
Because of the time delay inherent in getting final verification that anthrax has been used, prophylactic antibiotic treatment of possibly exposed personnel must be started as soon as possible. Treatment Anthrax cannot be spread from person to person, except in the rare case of skin exudates from cutaneous anthrax. However, a person's clothing and body may be contaminated with anthrax spores. Effective decontamination of people can be accomplished by a thorough wash-down with antimicrobial soap and water. Wastewater is treated with bleach or another antimicrobial agent. Effective decontamination of articles can be accomplished by boiling them in water for 30 minutes or longer. Chlorine bleach is ineffective in destroying spores and vegetative cells on surfaces, though formaldehyde is effective. Burning clothing is very effective in destroying spores. After decontamination, there is no need to immunize, treat, or isolate contacts of persons ill with anthrax unless they were also exposed to the same source of infection. Antibiotics Early antibiotic treatment of anthrax is essential; delay significantly lessens chances for survival. Treatment for anthrax infection and other bacterial infections includes large doses of intravenous and oral antibiotics, such as fluoroquinolones (ciprofloxacin), doxycycline, erythromycin, vancomycin, or penicillin. FDA-approved agents include ciprofloxacin, doxycycline, and penicillin. In possible cases of pulmonary anthrax, early antibiotic prophylaxis treatment is crucial to prevent possible death. Many attempts have been made to develop new drugs against anthrax, but existing drugs are effective if treatment is started soon enough. Monoclonal antibodies In May 2009, Human Genome Sciences submitted a biologic license application (BLA, permission to market) for its new drug, raxibacumab (brand name ABthrax), intended for emergency treatment of inhaled anthrax. On 14 December 2012, the US Food and Drug Administration approved raxibacumab injection to treat inhalational anthrax. Raxibacumab is a monoclonal antibody that neutralizes toxins produced by B. anthracis. In March 2016, FDA approved a second anthrax treatment using a monoclonal antibody which neutralizes the toxins produced by B. anthracis. Obiltoxaximab is approved to treat inhalational anthrax in conjunction with appropriate antibacterial drugs, and for prevention when alternative therapies are not available or appropriate. Biologic for Drug-, Antibody- or Vaccine-resistant Anthrax Treatment of multi-drug-resistant, antibody- or vaccine-resistant anthrax is also possible. Legler et al. showed that pegylated CapD (capsule depolymerase) could provide protection against 5 LD50 exposures to lethal Ames spores without the use of antibiotics, monoclonal antibodies, or vaccines. The CapD enzyme removes the poly-D-glutamate (PDGA) capsular material from the bacteria, rendering it susceptible to the innate immune responses. The unencapsulated bacteria can then be cleared. Prognosis Cutaneous anthrax is rarely fatal if treated, because the infection area is limited to the skin, preventing the lethal factor, edema factor, and protective antigen from entering and destroying a vital organ. Without treatment, up to 20% of cutaneous infection cases progress to toxemia and death. Before 2001, fatality rates for inhalation anthrax were 90%; since then, they have fallen to 45%. People who progress to the fulminant phase of inhalational anthrax nearly always die, with one case study showing a death rate of 97%.
Anthrax meningoencephalitis is also nearly always fatal. Gastrointestinal anthrax infections can be treated, but usually result in fatality rates of 25% to 60%, depending upon how soon treatment commences. Injection anthrax is the rarest form of anthrax and has only been observed in a group of heroin-injecting drug users. Animals Anthrax, a bacterial disease caused by Bacillus anthracis, can have devastating effects on animals. It primarily affects herbivores such as cattle, sheep, and goats, but a wide range of mammals, birds, and even humans can also be susceptible. Infection typically occurs through the ingestion of spores in contaminated soil or plants. Once inside the host, the spores transform into active bacteria, producing lethal toxins that lead to severe symptoms. Infected animals often exhibit high fever, rapid breathing, and convulsions, and they may succumb to the disease within hours to days. The presence of anthrax can pose significant challenges to livestock management and wildlife conservation efforts, making it a critical concern for both animal health and public health, as it can occasionally be transmitted to humans through contact with infected animals or contaminated products. Infected animals may stagger, have difficulty breathing, tremble, and finally collapse and die within a few hours. Epidemiology Globally, at least 2,000 cases occur a year. United States The last fatal case of natural inhalational anthrax in the United States occurred in California in 1976, when a home weaver died after working with infected wool imported from Pakistan. To minimize the chance of spreading the disease, the body was transported to UCLA in a sealed plastic body bag within a sealed metal container for autopsy. Gastrointestinal anthrax is exceedingly rare in the United States, with only two cases on record. The first case was reported in 1942, according to the Centers for Disease Control and Prevention. During December 2009, the New Hampshire Department of Health and Human Services confirmed a case of gastrointestinal anthrax in an adult female. The CDC investigated the source and the possibility that it was contracted from an African drum recently used by the woman taking part in a drum circle. The woman apparently inhaled anthrax, in spore form, from the hide of the drum. She became critically ill, but with gastrointestinal anthrax rather than inhaled anthrax, which made her unique in American medical history. The building where the infection took place was cleaned and reopened to the public and the woman recovered. The New Hampshire state epidemiologist, Jodie Dionne-Odom, stated "It is a mystery. We really don't know why it happened." In 2007, two cases of cutaneous anthrax were reported in Danbury, Connecticut. The first case involved a maker of traditional African-style drums who was working with a goat hide purchased from a dealer in New York City which had been previously cleared by Customs. While the hide was being scraped, a spider bite led to the spores entering the bloodstream. His son also became infected. Croatia In July 2022, dozens of cattle in a nature park in Lonjsko Polje, a flood plain by the Sava river, died of anthrax, and six people were hospitalized with mild, skin-related symptoms. United Kingdom In November 2008, a drum maker in the United Kingdom who worked with untreated animal skins died from anthrax. In December 2009, an outbreak of anthrax occurred among heroin addicts in the Glasgow and Stirling areas of Scotland, resulting in 14 deaths.
The source of the anthrax is believed to have been dilution of the heroin with bone meal in Afghanistan. History Discovery Robert Koch, a German physician and scientist, first identified the bacterium that causes anthrax in 1875 in Wollstein (now Wolsztyn, Poland). His pioneering work in the late 19th century was one of the first demonstrations that diseases could be caused by microbes. In a groundbreaking series of experiments, he uncovered the lifecycle and means of transmission of anthrax. His experiments not only helped create an understanding of anthrax but also helped elucidate the role of microbes in causing illness at a time when debates still took place over spontaneous generation versus cell theory. Koch went on to study the mechanisms of other diseases and won the 1905 Nobel Prize in Physiology or Medicine for his discovery of the bacterium causing tuberculosis. Although Koch arguably made the greatest theoretical contribution to understanding anthrax, other researchers were more concerned with the practical questions of how to prevent the disease. In Britain, where anthrax affected workers in the wool, worsted, hides, and tanning industries, it was viewed with fear. John Henry Bell, a doctor born and based in Bradford, first made the link between the mysterious and deadly "woolsorter's disease" and anthrax, showing in 1878 that they were one and the same. In the early 20th century, Friederich Wilhelm Eurich, the German bacteriologist who settled in Bradford with his family as a child, carried out important research for the local Anthrax Investigation Board. Eurich also made valuable contributions to a Home Office Departmental Committee of Inquiry, established in 1913 to address the continuing problem of industrial anthrax. His work in this capacity, much of it in collaboration with the factory inspector G. Elmhirst Duckering, led directly to the Anthrax Prevention Act (1919). First vaccination Anthrax posed a major economic challenge in France and elsewhere during the 19th century. Horses, cattle, and sheep were particularly vulnerable, and national funds were set aside to investigate the production of a vaccine. French scientist Louis Pasteur was charged with the production of a vaccine, following his successful work in developing methods that helped to protect the important wine and silk industries. In May 1881, Pasteur – in collaboration with his assistants Jean-Joseph Henri Toussaint, Émile Roux and others – performed a public experiment at Pouilly-le-Fort to demonstrate his concept of vaccination. He prepared two groups of 25 sheep, one goat, and several cattle. The animals of one group were twice injected with an anthrax vaccine prepared by Pasteur, at an interval of 15 days; the control group was left unvaccinated. Thirty days after the first injection, both groups were injected with a culture of live anthrax bacteria. All the animals in the unvaccinated group died, while all of the animals in the vaccinated group survived. After this apparent triumph, which was widely reported in the local, national, and international press, Pasteur made strenuous efforts to export the vaccine beyond France. He used his celebrity status to establish Pasteur Institutes across Europe and Asia, and his nephew, Adrien Loir, travelled to Australia in 1888 to try to introduce the vaccine to combat anthrax in New South Wales.
Ultimately, the vaccine was unsuccessful in the challenging climate of rural Australia, and it was soon superseded by a more robust version developed by local researchers John Gunn and John McGarvie Smith. The human vaccine for anthrax became available in 1954. This was a cell-free vaccine instead of the live-cell Pasteur-style vaccine used for veterinary purposes. An improved cell-free vaccine became available in 1970. Engineered strains The Sterne strain of anthrax, named after the Trieste-born immunologist Max Sterne, is an attenuated strain used as a vaccine, which contains only the anthrax toxin virulence plasmid and not the polyglutamic acid capsule expressing plasmid. Strain 836, created by the Soviet bioweapons program in the 1980s, was later called by the Los Angeles Times "the most virulent and vicious strain of anthrax known to man". The virulent Ames strain, which was used in the 2001 anthrax attacks in the United States, has received the most news coverage of any anthrax outbreak. The Ames strain contains two virulence plasmids, which separately encode for a three-protein toxin, called anthrax toxin, and a polyglutamic acid capsule. Nonetheless, the Vollum strain, developed but never used as a biological weapon during the Second World War, is much more dangerous. The Vollum (also incorrectly referred to as Vellum) strain was isolated in 1935 from a cow in Oxfordshire. This same strain was used during the Gruinard bioweapons trials. A variation of Vollum, known as "Vollum 1B", was used during the 1960s in the US and UK bioweapon programs. Vollum 1B is widely believed to have been isolated from William A. Boyles, a 46-year-old scientist at the US Army Biological Warfare Laboratories at Camp (later Fort) Detrick, Maryland, who died in 1951 after being accidentally infected with the Vollum strain. Society and culture Site cleanup Anthrax spores can survive for very long periods of time in the environment after release. Chemical methods for cleaning anthrax-contaminated sites or materials may use oxidizing agents such as peroxides, ethylene oxide, Sandia Foam, chlorine dioxide (used in the Hart Senate Office Building), peracetic acid, ozone gas, hypochlorous acid, sodium persulfate, and liquid bleach products containing sodium hypochlorite. Nonoxidizing agents shown to be effective for anthrax decontamination include methyl bromide, formaldehyde, and metam sodium. These agents destroy bacterial spores. All of the aforementioned anthrax decontamination technologies have been demonstrated to be effective in laboratory tests conducted by the US EPA or others. Decontamination techniques for Bacillus anthracis spores are affected by the material with which the spores are associated, environmental factors such as temperature and humidity, and microbiological factors such as the spore species, anthracis strain, and test methods used. A bleach solution for treating hard surfaces has been approved by the EPA. Chlorine dioxide has emerged as the preferred biocide against anthrax-contaminated sites, having been employed in the treatment of numerous government buildings over the past decade. Its chief drawback is the need for in situ processes to have the reactant on demand. To speed the process, trace amounts of a nontoxic catalyst composed of iron and tetroamido macrocyclic ligands are combined with sodium carbonate and bicarbonate and converted into a spray. The spray formula is applied to an infested area and is followed by another spray containing tert-butyl hydroperoxide. 
Using the catalyst method, complete destruction of all anthrax spores can be achieved in under 30 minutes. A standard catalyst-free spray destroys fewer than half the spores in the same amount of time. Cleanups at a Senate office building, several contaminated postal facilities, and other US government and private office buildings, carried out in a collaborative effort headed by the Environmental Protection Agency, showed decontamination to be possible, but time-consuming and costly. Clearing the Senate Office Building of anthrax spores cost $27 million, according to the Government Accountability Office. Cleaning the Brentwood postal facility in Washington cost $130 million and took 26 months. Since then, newer and less costly methods have been developed. Cleanup of anthrax-contaminated areas on ranches and in the wild is much more problematic. Carcasses may be burned, though often three days are needed to burn a large carcass and this is not feasible in areas with little wood. Carcasses may also be buried, though the burying of large animals deeply enough to prevent resurfacing of spores requires much manpower and expensive tools. Carcasses have been soaked in formaldehyde to kill spores, though this has environmental contamination issues. Block burning of vegetation in large areas enclosing an anthrax outbreak has been tried; this, while environmentally destructive, causes healthy animals to move away from an area with carcasses in search of fresh grass. Some wildlife workers have experimented with covering fresh anthrax carcasses with shadecloth and heavy objects. This prevents some scavengers from opening the carcasses, thus allowing the putrefactive bacteria within the carcass to kill the vegetative B. anthracis cells and preventing sporulation. This method also has drawbacks, as scavengers such as hyenas are capable of infiltrating almost any exclosure. The experimental site at Gruinard Island is said to have been decontaminated with a mixture of formaldehyde and seawater by the Ministry of Defence. It is not clear whether similar treatments had been applied to US test sites. Biological warfare Anthrax spores have been used as a biological warfare weapon. Its first modern use occurred when Nordic rebels, supplied by the German General Staff, used anthrax with unknown results against the Imperial Russian Army in Finland in 1916. Anthrax was first tested as a biological warfare agent by Unit 731 of the Japanese Kwantung Army in Manchuria during the 1930s; some of this testing involved intentional infection of prisoners of war, thousands of whom died. Anthrax, designated at the time as Agent N, was also investigated by the Allies in the 1940s. In 1942, British scientists at Porton Down began research on Operation Vegetarian, an ultimately unused biowarfare military operation plan which called for animal feed pellets containing linseed infected with anthrax spores of the Vollum-14578 strain to be dropped by air over the countryside of Nazi Germany. The pellets would be eaten by cattle, which would in turn be eaten by the human population and as such severely disrupt the German war effort. In the same year, bioweapons tests were carried out on the uninhabited Gruinard Island in the Scottish Highlands, with Porton Down scientists studying the effect of anthrax on the island's population of sheep.
Ultimately, five million pellets were created, though plans to drop them over Germany using Royal Air Force bombers in 1944 were scrapped after the success of Operation Overlord and the subsequent Allied liberation of France. All pellets were destroyed using incinerators in 1945. Weaponized anthrax was part of the US stockpile prior to 1972, when the United States signed the Biological Weapons Convention. President Nixon ordered the dismantling of US biowarfare programs in 1969 and the destruction of all existing stockpiles of bioweapons. In 1978–79, the Rhodesian government used anthrax against cattle and humans during its campaign against rebels. The Soviet Union created and stored 100 to 200 tons of anthrax spores at Kantubek on Vozrozhdeniya Island; they were abandoned in 1992 and destroyed in 2002. American military and British Army personnel are no longer routinely vaccinated against anthrax prior to active service in places where biological attacks are considered a threat. Sverdlovsk incident (2 April 1979) Despite signing the 1972 agreement to end bioweapon production, the government of the Soviet Union had an active bioweapons program that included the production of hundreds of tons of anthrax after this period. On 2 April 1979, some of the over one million people living in Sverdlovsk (now called Ekaterinburg, Russia), east of Moscow, were exposed to an accidental release of anthrax from a biological weapons complex located near there. At least 94 people were infected, of whom at least 68 died. One victim died four days after the release, 10 over an eight-day period at the peak of the deaths, and the last six weeks later. Extensive cleanup, vaccinations, and medical interventions managed to save about 30 of the victims. Extensive cover-ups and destruction of records by the KGB continued from 1979 until Russian President Boris Yeltsin admitted this anthrax accident in 1992. Jeanne Guillemin reported in 1999 that a combined Russian and United States team investigated the accident in 1992. Nearly all of the night-shift workers of a ceramics plant directly across the street from the biological facility (compound 19) became infected, and most died. Since most were men, some NATO governments suspected the Soviet Union had developed a sex-specific weapon. The government blamed the outbreak on the consumption of anthrax-tainted meat, and ordered the confiscation of all uninspected meat that entered the city. They also ordered all stray dogs to be shot and people not to have contact with sick animals. Also, a voluntary evacuation and anthrax vaccination program was established for people aged 18 to 55. To support the cover-up story, Soviet medical and legal journals published articles about an outbreak in livestock that caused gastrointestinal anthrax in people having consumed infected meat, and cutaneous anthrax in people having come into contact with the animals. All medical and public health records were confiscated by the KGB. In addition to the medical problems the outbreak caused, it also prompted Western countries to be more suspicious of a covert Soviet bioweapons program and to increase their surveillance of suspected sites. In 1986, the US government was allowed to investigate the incident, and concluded the exposure was from aerosol anthrax from a military weapons facility. In 1992, President Yeltsin admitted he was "absolutely certain" that "rumors" about the Soviet Union violating the 1972 Bioweapons Treaty were true.
The Soviet Union, like the US and UK, had agreed to submit information to the UN about their bioweapons programs, but omitted known facilities and never acknowledged their weapons program. Anthrax bioterrorism In theory, anthrax spores can be cultivated with minimal special equipment and a first-year collegiate microbiological education. To make large amounts of an aerosol form of anthrax suitable for biological warfare requires extensive practical knowledge, training, and highly advanced equipment. Concentrated anthrax spores were used for bioterrorism in the 2001 anthrax attacks in the United States, delivered by mailing postal letters containing the spores. The letters were sent to several news media offices and two Democratic senators: Tom Daschle of South Dakota and Patrick Leahy of Vermont. As a result, 22 were infected and five died. Only a few grams of material were used in these attacks and in August 2008, the US Department of Justice announced they believed that Bruce Ivins, a senior biodefense researcher employed by the United States government, was responsible. These events also spawned many anthrax hoaxes. Due to these events, the US Postal Service installed biohazard detection systems at its major distribution centers to actively scan for anthrax being transported through the mail. As of 2020, no positive alerts by these systems have occurred. Decontaminating mail In response to the postal anthrax attacks and hoaxes, the United States Postal Service sterilized some mail using gamma irradiation and treatment with a proprietary enzyme formula supplied by Sipco Industries. A scientific experiment performed by a high school student, later published in the Journal of Medical Toxicology, suggested a domestic electric iron at its hottest setting (at least ) used for at least 5 minutes should destroy all anthrax spores in a common postal envelope. Other animals Anthrax is especially rare in dogs and cats, as is evidenced by a single reported case in the United States in 2001. Anthrax outbreaks occur in some wild animal populations with some regularity. Russian researchers estimate arctic permafrost contains around 1.5 million anthrax-infected reindeer carcasses, and the spores may survive in the permafrost for 105 years. A risk exists that global warming in the Arctic can thaw the permafrost, releasing anthrax spores in the carcasses. In 2016, an anthrax outbreak in reindeer was linked to a 75-year-old carcass that defrosted during a heat wave.
Biology and health sciences
Infectious disease
null
42937
https://en.wikipedia.org/wiki/Photodiode
Photodiode
A photodiode is a semiconductor diode sensitive to photon radiation, such as visible light, infrared or ultraviolet radiation, X-rays and gamma rays. It produces an electrical current when it absorbs photons. This can be used for detection and measurement applications, or for the generation of electrical power in solar cells. Photodiodes are used in a wide range of applications throughout the electromagnetic spectrum from visible light photocells to gamma ray spectrometers. Principle of operation A photodiode is a PIN structure or p–n junction. When a photon of sufficient energy strikes the diode, it creates an electron–hole pair. This mechanism is also known as the inner photoelectric effect. If the absorption occurs in the junction's depletion region, or one diffusion length away from it, these carriers are swept from the junction by the built-in electric field of the depletion region. Thus holes move toward the anode, and electrons toward the cathode, and a photocurrent is produced. The total current through the photodiode is the sum of the dark current (current that is passed in the absence of light) and the photocurrent, so the dark current must be minimized to maximize the sensitivity of the device. To first order, for a given spectral distribution, the photocurrent is linearly proportional to the irradiance. Photovoltaic mode In photovoltaic mode (zero bias), photocurrent flows out of the anode through a short circuit to the cathode. If the circuit is opened or has a load impedance, restricting the photocurrent out of the device, a voltage builds up in the direction that forward biases the diode, that is, anode positive with respect to cathode. If the circuit is shorted or the impedance is low, a forward current will consume all or some of the photocurrent. This mode exploits the photovoltaic effect, which is the basis for solar cells – a traditional solar cell is just a large area photodiode. For optimum power output, the photovoltaic cell will be operated at a voltage that causes only a small forward current compared to the photocurrent. Photoconductive mode In photoconductive mode the diode is reverse biased, that is, with the cathode driven positive with respect to the anode. This reduces the response time because the additional reverse bias increases the width of the depletion layer, which decreases the junction's capacitance and increases the region with an electric field that will cause electrons to be quickly collected. The reverse bias also creates dark current without much change in the photocurrent. Although this mode is faster, the photoconductive mode can exhibit more electronic noise due to dark current or avalanche effects. The leakage current of a good PIN diode is so low (<1 nA) that the Johnson–Nyquist noise of the load resistance in a typical circuit often dominates. Related devices Avalanche photodiodes are photodiodes with structure optimized for operating with high reverse bias, approaching the reverse breakdown voltage. This allows each photo-generated carrier to be multiplied by avalanche breakdown, resulting in internal gain within the photodiode, which increases the effective responsivity of the device. A phototransistor is a light-sensitive transistor. A common type of phototransistor, the bipolar phototransistor, is in essence a bipolar transistor encased in a transparent case so that light can reach the base–collector junction. It was invented by John N. Shive at Bell Labs in 1948 but it was not announced until 1950. 
The electrons that are generated by photons in the base–collector junction are injected into the base, and this photodiode current is amplified by the transistor's current gain β (or hfe). If the base and collector leads are used and the emitter is left unconnected, the phototransistor becomes a photodiode. While phototransistors have a higher responsivity for light they are not able to detect low levels of light any better than photodiodes. Phototransistors also have significantly longer response times. Another type of phototransistor, the field-effect phototransistor (also known as photoFET), is a light-sensitive field-effect transistor. Unlike photobipolar transistors, photoFETs control drain-source current by creating a gate voltage. A solaristor is a two-terminal gate-less phototransistor. A compact class of two-terminal phototransistors or solaristors have been demonstrated in 2018 by ICN2 researchers. The novel concept is a two-in-one power source plus transistor device that runs on solar energy by exploiting a memresistive effect in the flow of photogenerated carriers. Materials The material used to make a photodiode is critical to defining its properties, because only photons with sufficient energy to excite electrons across the material's bandgap will produce significant photocurrents. Materials commonly used to produce photodiodes are listed in the table below. Because of their greater bandgap, silicon-based photodiodes generate less noise than germanium-based photodiodes. Binary materials, such as MoS2, and graphene emerged as new materials for the production of photodiodes. Unwanted and wanted photodiode effects Any p–n junction, if illuminated, is potentially a photodiode. Semiconductor devices such as diodes, transistors and ICs contain p–n junctions, and will not function correctly if they are illuminated by unwanted light. This is avoided by encapsulating devices in opaque housings. If these housings are not completely opaque to high-energy radiation (ultraviolet, X-rays, gamma rays), diodes, transistors and ICs can malfunction due to induced photo-currents. Background radiation from the packaging is also significant. Radiation hardening mitigates these effects. In some cases, the effect is actually wanted, for example to use LEDs as light-sensitive devices (see LED as light sensor) or even for energy harvesting, then sometimes called light-emitting and light-absorbing diodes (LEADs). Features Critical performance parameters of a photodiode include spectral responsivity, dark current, response time and noise-equivalent power. Spectral responsivity The spectral responsivity is a ratio of the generated photocurrent to incident light power, expressed in A/W when used in photoconductive mode. The wavelength-dependence may also be expressed as a quantum efficiency or the ratio of the number of photogenerated carriers to incident photons which is a unitless quantity. Dark current The dark current is the current through the photodiode in the absence of light, when it is operated in photoconductive mode. The dark current includes photocurrent generated by background radiation and the saturation current of the semiconductor junction. Dark current must be accounted for by calibration if a photodiode is used to make an accurate optical power measurement, and it is also a source of noise when a photodiode is used in an optical communication system. Response time The response time is the time required for the detector to respond to an optical input. 
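As an illustration of the spectral responsivity described above, the standard relation R = ηqλ/(hc) converts a quantum efficiency η at wavelength λ into a responsivity in A/W, and the photocurrent is then the responsivity times the incident optical power. The short sketch below evaluates this for an assumed 80% quantum efficiency at 850 nm; these values are chosen for illustration and are not taken from this article.

```python
# Minimal sketch: convert a photodiode's quantum efficiency to spectral
# responsivity (A/W) using R = eta * q * lambda / (h * c), then compute
# the photocurrent for a given optical power. Example values are
# illustrative assumptions, not figures from this article.

Q = 1.602176634e-19   # elementary charge, C
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light in vacuum, m/s

def responsivity(quantum_efficiency: float, wavelength_m: float) -> float:
    """Spectral responsivity in A/W for a given quantum efficiency."""
    return quantum_efficiency * Q * wavelength_m / (H * C)

def photocurrent(responsivity_a_per_w: float, optical_power_w: float) -> float:
    """Photocurrent (A) is responsivity times incident optical power."""
    return responsivity_a_per_w * optical_power_w

if __name__ == "__main__":
    r = responsivity(quantum_efficiency=0.8, wavelength_m=850e-9)  # about 0.55 A/W
    print(f"responsivity: {r:.3f} A/W")
    print(f"photocurrent at 1 uW: {photocurrent(r, 1e-6) * 1e9:.1f} nA")
```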
A photon absorbed by the semiconducting material will generate an electron–hole pair which will in turn start moving in the material under the effect of the electric field and thus generate a current. The finite duration of this current is known as the transit-time spread and can be evaluated by using Ramo's theorem. One can also show with this theorem that the total charge generated in the external circuit is e and not 2e as one might expect by the presence of the two carriers. Indeed, the integral of the current due to both electron and hole over time must be equal to e. The resistance and capacitance of the photodiode and the external circuitry give rise to another response time known as RC time constant (). This combination of R and C integrates the photoresponse over time and thus lengthens the impulse response of the photodiode. When used in an optical communication system, the response time determines the bandwidth available for signal modulation and thus data transmission. Noise-equivalent power Noise-equivalent power (NEP) is the minimum input optical power to generate photocurrent, equal to the rms noise current in a 1 hertz bandwidth. NEP is essentially the minimum detectable power. The related characteristic detectivity () is the inverse of NEP (1/NEP) and the specific detectivity () is the detectivity multiplied by the square root of the area () of the photodetector () for a 1 Hz bandwidth. The specific detectivity allows different systems to be compared independent of sensor area and system bandwidth; a higher detectivity value indicates a low-noise device or system. Although it is traditional to give () in many catalogues as a measure of the diode's quality, in practice, it is hardly ever the key parameter. When a photodiode is used in an optical communication system, all these parameters contribute to the sensitivity of the optical receiver which is the minimum input power required for the receiver to achieve a specified bit error rate. Applications P–n photodiodes are used in similar applications to other photodetectors, such as photoconductors, charge-coupled devices (CCD), and photomultiplier tubes. They may be used to generate an output which is dependent upon the illumination (analog for measurement), or to change the state of circuitry (digital, either for control and switching or for digital signal processing). Photodiodes are used in consumer electronics devices such as compact disc players, smoke detectors, medical devices and the receivers for infrared remote control devices used to control equipment from televisions to air conditioners. For many applications either photodiodes or photoconductors may be used. Either type of photosensor may be used for light measurement, as in camera light meters, or to respond to light levels, as in switching on street lighting after dark. Photosensors of all types may be used to respond to incident light or to a source of light which is part of the same circuit or system. A photodiode is often combined into a single component with an emitter of light, usually a light-emitting diode (LED), either to detect the presence of a mechanical obstruction to the beam (slotted optical switch) or to couple two digital or analog circuits while maintaining extremely high electrical isolation between them, often for safety (optocoupler). The combination of LED and photodiode is also used in many sensor systems to characterize different types of products based on their optical absorbance. 
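Two of the figures of merit discussed above can be made concrete with a minimal sketch: the bandwidth implied by the RC time constant, f = 1/(2πRC), and the specific detectivity D* = √A / NEP for a 1 Hz bandwidth. The resistance, capacitance, area, and NEP values below are illustrative assumptions, not data from this article.

```python
import math

# Sketch of two standard photodiode figures of merit; the formulas are the
# textbook ones and the component values are illustrative assumptions.

def rc_bandwidth_hz(load_ohms: float, capacitance_f: float) -> float:
    """3 dB bandwidth set by the RC time constant: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * load_ohms * capacitance_f)

def specific_detectivity(nep_w_per_sqrt_hz: float, area_cm2: float) -> float:
    """D* = sqrt(area) / NEP, in cm*sqrt(Hz)/W, for a 1 Hz bandwidth."""
    return math.sqrt(area_cm2) / nep_w_per_sqrt_hz

if __name__ == "__main__":
    # An assumed 10 pF photodiode into a 50 ohm load: bandwidth ~318 MHz.
    print(f"RC-limited bandwidth: {rc_bandwidth_hz(50.0, 10e-12) / 1e6:.0f} MHz")
    # An assumed 0.01 cm^2 detector with an NEP of 1e-14 W/sqrt(Hz).
    print(f"D*: {specific_detectivity(1e-14, 0.01):.2e} cm*Hz^0.5/W")
```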
Photodiodes are often used for accurate measurement of light intensity in science and industry. They generally have a more linear response than photoconductors. They are also widely used in various medical applications, such as detectors for computed tomography (coupled with scintillators), instruments to analyze samples (immunoassay), and pulse oximeters. PIN diodes are much faster and more sensitive than p–n junction diodes, and hence are often used for optical communications and in lighting regulation. P–n photodiodes are not used to measure extremely low light intensities. Instead, if high sensitivity is needed, avalanche photodiodes, intensified charge-coupled devices or photomultiplier tubes are used for applications such as astronomy, spectroscopy, night vision equipment and laser rangefinding. Comparison with photomultipliers Advantages compared to photomultipliers: Excellent linearity of output current as a function of incident light Spectral response from 190 nm to 1100 nm (silicon), longer wavelengths with other semiconductor materials Low noise Ruggedized to mechanical stress Low cost Compact and light weight Long lifetime High quantum efficiency, typically 60–80% No high voltage required Disadvantages compared to photomultipliers: Small area No internal gain (except avalanche photodiodes, but their gain is typically 10²–10³ compared to 10⁵–10⁸ for the photomultiplier) Much lower overall sensitivity Photon counting only possible with specially designed, usually cooled photodiodes, with special electronic circuits Response time for many designs is slower Latent effect Pinned photodiode The pinned photodiode (PPD) has a shallow implant (P+ or N+) in N-type or P-type diffusion layer, respectively, over a P-type or N-type (respectively) substrate layer, such that the intermediate diffusion layer can be fully depleted of majority carriers, like the base region of a bipolar junction transistor. The PPD (usually PNP) is used in CMOS active-pixel sensors; a precursor NPNP triple junction variant with the MOS buffer capacitor and the back-light illumination scheme with complete charge transfer and no image lag was invented by Sony in 1975. This scheme was widely used in many applications of charge transfer devices. Early charge-coupled device image sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode. It was developed by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. Sony in 1975 recognized that lag can be eliminated if the signal carriers could be transferred from the photodiode to the CCD. This led to their invention of the pinned photodiode, a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. It was first publicly reported by Teranishi and Ishihara with A. Kohono, E. Oda and K. Arai in 1982, with the addition of an anti-blooming structure. The new photodetector structure, invented by Sony in 1975 and developed by NEC in 1982 and Kodak in 1984, was given the name "pinned photodiode" (PPD) by B.C. Burkey at Kodak in 1984. In 1987, the PPD began to be incorporated into most CCD sensors, becoming a fixture in consumer electronic video cameras and then digital still cameras. A CMOS image sensor with a low-voltage-PPD technology was first fabricated in 1995 by a joint JPL and Kodak team. The CMOS sensor with PPD technology was further advanced and refined by R.M. Guidash in 1997, K. Yonemoto and H. Sumi in 2000, and I. Inoue in 2003. 
This led to CMOS sensors achieving imaging performance on par with CCD sensors, and later exceeding them. Photodiode array A one-dimensional array of hundreds or thousands of photodiodes can be used as a position sensor, for example as part of an angle sensor. A two-dimensional array is used in image sensors and optical mice. In some applications, photodiode arrays allow for high-speed parallel readout, as opposed to integrating scanning electronics as in a charge-coupled device (CCD) or CMOS sensor. The optical mouse chip shown in the photo has parallel (not multiplexed) access to all 16 photodiodes in its 4 × 4 array. Passive-pixel image sensor The passive-pixel sensor (PPS) is a type of photodiode array. It was the precursor to the active-pixel sensor (APS). A passive-pixel sensor consists of passive pixels which are read out without amplification, with each pixel consisting of a photodiode and a MOSFET switch. In a photodiode array, pixels contain a p–n junction, integrated capacitor, and MOSFETs as selection transistors. A photodiode array was proposed by G. Weckler in 1968, predating the CCD. This was the basis for the PPS. The noise of photodiode arrays is sometimes a limitation to performance. It was not possible to fabricate active pixel sensors with a practical pixel size in the 1970s, due to limited microlithography technology at the time.
Technology
Components
null
42957
https://en.wikipedia.org/wiki/Endospore
Endospore
An endospore is a dormant, tough, and non-reproductive structure produced by some bacteria in the phylum Bacillota. The name "endospore" is suggestive of a spore or seed-like form (endo means 'within'), but it is not a true spore (i.e., not an offspring). It is a stripped-down, dormant form to which the bacterium can reduce itself. Endospore formation is usually triggered by a lack of nutrients, and usually occurs in gram-positive bacteria. In endospore formation, the bacterium divides within its cell wall, and one side then engulfs the other. Endospores enable bacteria to lie dormant for extended periods, even centuries. There are many reports of spores remaining viable over 10,000 years, and revival of spores millions of years old has been claimed. There is one report of viable spores of Bacillus marismortui in salt crystals approximately 25 million years old. When the environment becomes more favorable, the endospore can reactivate itself into a vegetative state. Most types of bacteria cannot change to the endospore form. Examples of bacterial species that can form endospores include Bacillus cereus, Bacillus anthracis, Bacillus thuringiensis, Clostridium botulinum, and Clostridium tetani. Endospore formation is not found among Archaea. The endospore consists of the bacterium's DNA, ribosomes and large amounts of dipicolinic acid. Dipicolinic acid is a spore-specific chemical that appears to help in the ability for endospores to maintain dormancy. This chemical accounts for up to 10% of the spore's dry weight. Endospores can survive without nutrients. They are resistant to ultraviolet radiation, desiccation, high temperature, extreme freezing and chemical disinfectants. Thermo-resistant endospores were first hypothesized by Ferdinand Cohn after studying Bacillus subtilis growth on cheese after boiling the cheese. His notion of spores being the reproductive mechanism for the growth was a large blow to the previous suggestions of spontaneous generation. Astrophysicist Steinn Sigurdsson said "There are viable bacterial spores that have been found that are 40 million years old on Earth—and we know they're very hardened to radiation." Common antibacterial agents that work by destroying vegetative cell walls do not affect endospores. Endospores are commonly found in soil and water, where they may survive for long periods of time. A variety of different microorganisms form "spores" or "cysts", but the endospores of low G+C gram-positive bacteria are by far the most resistant to harsh conditions. Some classes of bacteria can turn into exospores, also known as microbial cysts, instead of endospores. Exospores and endospores are two kinds of "hibernating" or dormant stages seen in some classes of microorganisms. Life cycle of bacteria The bacterial life cycle does not necessarily include sporulation. Sporulation is usually triggered by adverse environmental conditions, so as to help the survival of the bacterium. Endospores exhibit no signs of life and can thus be described as cryptobiotic. Endospores retain viability indefinitely and they can germinate into vegetative cells under the appropriate conditions. Endospores have survived thousands of years until environmental stimuli trigger germination. They have been characterized as the most durable cells produced in nature. Structure Bacteria produce a single endospore internally. The spore is sometimes surrounded by a thin covering known as the exosporium, which overlies the spore coat. 
The spore coat, which acts like a sieve that excludes large toxic molecules like lysozyme, is resistant to many toxic molecules and may also contain enzymes that are involved in germination. In Bacillus subtilis endospores, the spore coat is estimated to contain more than 70 coat proteins, which are organized into an inner and an outer coat layer. The X-ray diffraction pattern of purified B. subtilis endospores indicates the presence of a component with a regular periodic structure, which Kadota and Iijima speculated might be formed from a keratin-like protein. However, after further studies this group concluded that the structure of the spore coat protein was different from keratin. When the B. subtilis genome was sequenced, no ortholog of human keratin was detected. The cortex lies beneath the spore coat and consists of peptidoglycan. The core wall lies beneath the cortex and surrounds the protoplast or core of the endospore. The core contains the spore chromosomal DNA which is encased in chromatin-like proteins known as SASPs (small acid-soluble spore proteins), that protect the spore DNA from UV radiation and heat. The core also contains normal cell structures, such as ribosomes and other enzymes, but is not metabolically active. Up to 20% of the dry weight of the endospore consists of calcium dipicolinate within the core, which is thought to stabilize the DNA. Dipicolinic acid could be responsible for the heat resistance of the spore, and calcium may aid in resistance to heat and oxidizing agents. However, mutants resistant to heat but lacking dipicolinic acid have been isolated, suggesting other mechanisms contributing to heat resistance are also at work. Small acid-soluble proteins (SASPs) are found in endospores. These proteins tightly bind and condense the DNA, and are in part responsible for resistance to UV light and DNA-damaging chemicals. Visualising endospores under light microscopy can be difficult due to the impermeability of the endospore wall to dyes and stains. While the rest of a bacterial cell may stain, the endospore is left colourless. To combat this, a special stain technique called the Moeller stain is used. That allows the endospore to show up as red, while the rest of the cell stains blue. Another staining technique for endospores is the Schaeffer-Fulton stain, which stains endospores green and bacterial bodies red. The arrangement of spore layers is as follows: Exosporium Spore coat Spore cortex Core wall Location The position of the endospore differs among bacterial species and is useful in identification. The main types within the cell are terminal, subterminal, and centrally placed endospores. Terminal endospores are seen at the poles of cells, whereas central endospores are more or less in the middle. Subterminal endospores are those between these two extremes, usually seen far enough towards the poles but close enough to the center so as not to be considered either terminal or central. Lateral endospores are seen occasionally. Examples of bacteria having terminal endospores include Clostridium tetani, the pathogen that causes the disease tetanus. Bacteria having a centrally placed endospore include Bacillus cereus. Sometimes the endospore can be so large the cell can be distended around the endospore. This is typical of Clostridium tetani. Formation and destruction Under conditions of starvation, especially the lack of carbon and nitrogen sources, a single endospore forms within some of the bacteria through a process called sporulation. 
When a bacterium detects environmental conditions are becoming unfavourable it may start the process of endosporulation, which takes about eight hours. The DNA is replicated and a membrane wall known as a spore septum begins to form between it and the rest of the cell. The plasma membrane of the cell surrounds this wall and pinches off to leave a double membrane around the DNA, and the developing structure is now known as a forespore. Calcium dipicolinate, the calcium salt of dipicolinic acid, is incorporated into the forespore during this time. The dipicolinic acid helps stabilize the proteins and DNA in the endospore. Next the peptidoglycan cortex forms between the two layers and the bacterium adds a spore coat to the outside of the forespore. In the final stages of endospore formation the newly forming endospore is dehydrated and allowed to mature before being released from the mother cell. The cortex is what makes the endospore so resistant to temperature. The cortex contains an inner membrane known as the core. The inner membrane that surrounds this core leads to the endospore's resistance against UV light and harsh chemicals that would normally destroy microbes. Sporulation is now complete, and the mature endospore will be released when the surrounding vegetative cell is degraded. Endospores are resistant to most agents that would normally kill the vegetative cells they formed from. Unlike persister cells, endospores are the result of a morphological differentiation process triggered by nutrient limitation (starvation) in the environment; endosporulation is initiated by quorum sensing within the "starving" population. Most disinfectants such as household cleaning products, alcohols, quaternary ammonium compounds and detergents have little effect on endospores. However, sterilant alkylating agents such as ethylene oxide (ETO), and 10% bleach are effective against endospores. To kill most anthrax spores, standard household bleach (with 5% sodium hypochlorite) must be in contact with the spores for at least several minutes; a very small proportion of spores can survive longer than 10 minutes in such a solution. Higher concentrations of bleach are not more effective, and can cause some types of bacteria to aggregate and thus survive. While significantly resistant to heat and radiation, endospores can be destroyed by burning or by autoclaving at a temperature exceeding the boiling point of water, 121 °C. Endospores are able to survive at 100 °C for hours, although the larger the number of hours the fewer that will survive. An indirect way to destroy them is to place them in an environment that reactivates them to their vegetative state. They will germinate within a day or two with the right environmental conditions, and then the vegetative cells, not as hardy as endospores, can be straightforwardly destroyed. This indirect method is called Tyndallization. It was the usual method for a while in the late 19th century before the introduction of inexpensive autoclaves. Prolonged exposure to ionising radiation, such as x-rays and gamma rays, will also kill most endospores. The endospores of certain types of (typically non-pathogenic) bacteria, such as Geobacillus stearothermophilus, are used as probes to verify that an autoclaved item has been rendered truly sterile: a small capsule containing the spores is put into the autoclave with the items; after the cycle the content of the capsule is cultured to check if anything will grow from it. 
If nothing will grow, then the spores were destroyed and the sterilization was successful. In hospitals, endospores on delicate invasive instruments such as endoscopes are killed by low-temperature, and non-corrosive, ethylene oxide sterilizers. Ethylene oxide is the only low-temperature sterilant to stop outbreaks on these instruments. In contrast, "high level disinfection" does not kill endospores but is used for instruments such as a colonoscope that do not enter sterile bodily cavities. This latter method uses only warm water, enzymes, and detergents. Bacterial endospores are resistant to antibiotics, most disinfectants, and physical agents such as radiation, boiling, and drying. The impermeability of the spore coat is thought to be responsible for the endospore's resistance to chemicals. The heat resistance of endospores is due to a variety of factors: Calcium dipicolinate, abundant within the endospore, may stabilize and protect the endospore's DNA. Small acid-soluble proteins (SASPs) saturate the endospore's DNA and protect it from heat, drying, chemicals, and radiation. They also function as a carbon and energy source for the development of a vegetative bacterium during germination. The cortex may osmotically remove water from the interior of the endospore and the dehydration that results is thought to be very important in the endospore's resistance to heat and radiation. Finally, DNA repair enzymes contained within the endospore are able to repair damaged DNA during germination. Reactivation Reactivation of the endospore occurs when conditions are more favourable and involves activation, germination, and outgrowth. Even if an endospore is located in plentiful nutrients, it may fail to germinate unless activation has taken place. This may be triggered by heating the endospore. Germination involves the dormant endospore starting metabolic activity and thus breaking hibernation. It is commonly characterised by rupture or absorption of the spore coat, swelling of the endospore, an increase in metabolic activity, and loss of resistance to environmental stress. Outgrowth follows germination and involves the core of the endospore manufacturing new chemical components and exiting the old spore coat to develop into a fully functional vegetative bacterial cell, which can divide to produce more cells. Endospores possess five times more sulfur than vegetative cells. This excess sulfur is concentrated in spore coats as an amino acid, cysteine. It is believed that the macromolecule accountable for maintaining the dormant state has a protein coat rich in cystine, stabilized by S-S linkages. A reduction in these linkages has the potential to change the tertiary structure, causing the protein to unfold. This conformational change in the protein is thought to be responsible for exposing active enzymatic sites necessary for endospore germination. Endospores can stay dormant for a very long time. For instance, endospores were found in the tombs of the Egyptian pharaohs. When placed in appropriate medium, under appropriate conditions, they were able to be reactivated. In 1995, Raul Cano of California Polytechnic State University found bacterial spores in the gut of a fossilized bee trapped in amber from a tree in the Dominican Republic. The bee fossilized in amber was dated to being about 25 million years old. The spores germinated when the amber was cracked open and the material from the gut of the bee was extracted and placed in nutrient medium. 
After the spores were analyzed by microscopy, it was determined that the cells were very similar to Lysinibacillus sphaericus which is found in bees in the Dominican Republic today. Importance As a simplified model for cellular differentiation, the molecular details of endospore formation have been extensively studied, specifically in the model organism Bacillus subtilis. These studies have contributed much to our understanding of the regulation of gene expression, transcription factors, and the sigma factor subunits of RNA polymerase. Endospores of the bacterium Bacillus anthracis were used in the 2001 anthrax attacks. The powder found in contaminated postal letters consisted of anthrax endospores. This intentional distribution led to 22 known cases of anthrax (11 inhalation and 11 cutaneous). The case fatality rate among those patients with inhalation anthrax was 45% (5/11). The six other individuals with inhalation anthrax and all the individuals with cutaneous anthrax recovered. Had it not been for antibiotic therapy, many more might have been stricken. According to WHO veterinary documents, B. anthracis sporulates when it sees oxygen instead of the carbon dioxide present in mammal blood; this signals to the bacteria that it has reached the end of the animal, and an inactive dispersable morphology is useful. Sporulation requires the presence of free oxygen. In the natural situation, this means the vegetative cycles occur within the low oxygen environment of the infected host and, within the host, the organism is exclusively in the vegetative form. Once outside the host, sporulation commences upon exposure to the air and the spore forms are essentially the exclusive phase in the environment. Biotechnology Bacillus subtilis spores are useful for the expression of recombinant proteins and in particular for the surface display of peptides and proteins as a tool for fundamental and applied research in the fields of microbiology, biotechnology and vaccination. Endospore-forming bacteria Examples of endospore-forming bacteria include the genera: Acetonema Actinomyces Alkalibacillus Ammoniphilus Amphibacillus Anaerobacter Anaerospora Aneurinibacillus Anoxybacillus Bacillus Brevibacillus Caldanaerobacter Caloramator Caminicella Cerasibacillus Clostridium Clostridiisalibacter Cohnella Coxiella (i.e. Coxiella burnetii) Dendrosporobacter Desulfotomaculum Desulfosporomusa Desulfosporosinus Desulfovirgula Desulfunispora Desulfurispora Filifactor Filobacillus Gelria Geobacillus Geosporobacter Gracilibacillus Halobacillus Halonatronum Heliobacterium Heliophilum Laceyella Lentibacillus Lysinibacillus Mahella Metabacterium Moorella Natroniella Oceanobacillus Orenia Ornithinibacillus Oxalophagus Oxobacter Paenibacillus Paraliobacillus Pelospora Pelotomaculum Piscibacillus Planifilum Pontibacillus Propionispora Salinibacillus Salsuginibacillus Seinonella Shimazuella Sporacetigenium Sporoanaerobacter Sporobacter Sporobacterium Sporohalobacter Sporolactobacillus Sporomusa Sporosarcina Sporotalea Sporotomaculum Syntrophomonas Syntrophospora Tenuibacillus Tepidibacter Terribacillus Thalassobacillus Thermoacetogenium Thermoactinomyces Thermoalkalibacillus Thermoanaerobacter Thermoanaeromonas Thermobacillus Thermoflavimicrobium Thermovenabulum Tuberibacillus Virgibacillus Vulcanobacillus
Biology and health sciences
Basic anatomy
Biology
42964
https://en.wikipedia.org/wiki/Snell%27s%20law
Snell's law
Snell's law (also known as the Snell–Descartes law, the ibn-Sahl law, and the law of refraction) is a formula used to describe the relationship between the angles of incidence and refraction, when referring to light or other waves passing through a boundary between two different isotropic media, such as water, glass, or air. In optics, the law is used in ray tracing to compute the angles of incidence or refraction, and in experimental optics to find the refractive index of a material. The law is also satisfied in meta-materials, which allow light to be bent "backward" at a negative angle of refraction with a negative refractive index. The law states that, for a given pair of media, the ratio of the sines of angle of incidence () and angle of refraction () is equal to the refractive index of the second medium with regard to the first () which is equal to the ratio of the refractive indices () of the two media, or equivalently, to the ratio of the phase velocities () in the two media. The law follows from Fermat's principle of least time, which in turn follows from the propagation of light as waves. History Ptolemy, in Alexandria, Egypt, had found a relationship regarding refraction angles, but it was inaccurate for angles that were not small. Ptolemy was confident he had found an accurate empirical law, partially as a result of slightly altering his data to fit theory (see: confirmation bias). The law was eventually named after Snell, although it was first discovered by the Persian scientist Ibn Sahl, at Baghdad court in 984. In the manuscript On Burning Mirrors and Lenses, Sahl used the law to derive lens shapes that focus light with no geometric aberration. Alhazen, in his Book of Optics (1021), came close to rediscovering the law of refraction, but he did not take this step. The law was rediscovered by Thomas Harriot in 1602, who however did not publish his results although he had corresponded with Kepler on this very subject. In 1621, the Dutch astronomer Willebrord Snellius (1580–1626)—Snell—derived a mathematically equivalent form, that remained unpublished during his lifetime. René Descartes independently derived the law using heuristic momentum conservation arguments in terms of sines in his 1637 essay Dioptrique, and used it to solve a range of optical problems. Rejecting Descartes' solution, Pierre de Fermat arrived at the same solution based solely on his principle of least time. Descartes assumed the speed of light was infinite, yet in his derivation of Snell's law he also assumed the denser the medium, the greater the speed of light. Fermat supported the opposing assumptions, i.e., the speed of light is finite, and his derivation depended upon the speed of light being slower in a denser medium. Fermat's derivation also utilized his invention of adequality, a mathematical procedure equivalent to differential calculus, for finding maxima, minima, and tangents. In his influential mathematics book Geometry, Descartes solves a problem that was worked on by Apollonius of Perga and Pappus of Alexandria. Given n lines L and a point P(L) on each line, find the locus of points Q such that the lengths of the line segments QP(L) satisfy certain conditions. For example, when n = 4, given the lines a, b, c, and d and a point A on a, B on b, and so on, find the locus of points Q such that the product QA*QB equals the product QC*QD. When the lines are not all parallel, Pappus showed that the loci are conics, but when Descartes considered larger n, he obtained cubic and higher degree curves. 
To show that the cubic curves were interesting, he showed that they arose naturally in optics from Snell's law. According to Dijksterhuis, "In De natura lucis et proprietate (1662) Isaac Vossius said that Descartes had seen Snell's paper and concocted his own proof. We now know this charge to be undeserved but it has been adopted many times since." Both Fermat and Huygens repeated this accusation that Descartes had copied Snell. In French, Snell's Law is sometimes called "la loi de Descartes" or more frequently "loi de Snell-Descartes". In his 1678 Traité de la Lumière, Christiaan Huygens showed how Snell's law of sines could be explained by, or derived from, the wave nature of light, using what we have come to call the Huygens–Fresnel principle. With the development of modern optical and electromagnetic theory, the ancient Snell's law was brought into a new stage. In 1962, Nicolaas Bloembergen showed that at the boundary of nonlinear medium, the Snell's law should be written in a general form. In 2008 and 2011, plasmonic metasurfaces were also demonstrated to change the reflection and refraction directions of light beam. Explanation Snell's law is used to determine the direction of light rays through refractive media with varying indices of refraction. The indices of refraction of the media, labeled , and so on, are used to represent the factor by which a light ray's speed decreases when traveling through a refractive medium, such as glass or water, as opposed to its velocity in a vacuum. As light passes the border between media, depending upon the relative refractive indices of the two media, the light will either be refracted to a lesser angle, or a greater one. These angles are measured with respect to the normal line, represented perpendicular to the boundary. In the case of light traveling from air into water, light would be refracted towards the normal line, because the light is slowed down in water; light traveling from water to air would refract away from the normal line. Refraction between two surfaces is also referred to as reversible because if all conditions were identical, the angles would be the same for light propagating in the opposite direction. Snell's law is generally true only for isotropic or specular media (such as glass). In anisotropic media such as some crystals, birefringence may split the refracted ray into two rays, the ordinary or o-ray which follows Snell's law, and the other extraordinary or e-ray which may not be co-planar with the incident ray. When the light or other wave involved is monochromatic, that is, of a single frequency, Snell's law can also be expressed in terms of a ratio of wavelengths in the two media, and : Derivations and formula Snell's law can be derived in various ways. Derivation from Fermat's principle Snell's law can be derived from Fermat's principle, which states that the light travels the path which takes the least time. By taking the derivative of the optical path length, the stationary point is found giving the path taken by the light. (There are situations of light violating Fermat's principle by not taking the least time path, as in reflection in a (spherical) mirror.) In a classic analogy, the area of lower refractive index is replaced by a beach, the area of higher refractive index by the sea, and the fastest way for a rescuer on the beach to get to a drowning person in the sea is to run along a path that follows Snell's law. 
As shown in the figure to the right, assume the refractive index of medium 1 and medium 2 are $n_1$ and $n_2$ respectively. Light enters medium 2 from medium 1 via point O. $\theta_1$ is the angle of incidence, $\theta_2$ is the angle of refraction with respect to the normal. The phase velocities of light in medium 1 and medium 2 are $v_1 = c/n_1$ and $v_2 = c/n_2$ respectively. $c$ is the speed of light in vacuum. Let T be the time required for the light to travel from point Q through point O to point P:
$$T = \frac{\sqrt{x^2 + a^2}}{v_1} + \frac{\sqrt{b^2 + (l - x)^2}}{v_2},$$
where a, b, l and x are as denoted in the right-hand figure, x being the varying parameter. To minimize it, one can differentiate:
$$\frac{dT}{dx} = \frac{x}{v_1 \sqrt{x^2 + a^2}} - \frac{l - x}{v_2 \sqrt{b^2 + (l - x)^2}} = 0 \quad \text{(stationary point)}.$$
Note that
$$\frac{x}{\sqrt{x^2 + a^2}} = \sin\theta_1 \quad\text{and}\quad \frac{l - x}{\sqrt{b^2 + (l - x)^2}} = \sin\theta_2.$$
Therefore,
$$\frac{\sin\theta_1}{v_1} = \frac{\sin\theta_2}{v_2}, \qquad\text{that is,}\qquad n_1 \sin\theta_1 = n_2 \sin\theta_2.$$
Derivation from Huygens's principle Alternatively, Snell's law can be derived using interference of all possible paths of light wave from source to observer—it results in destructive interference everywhere except extrema of phase (where interference is constructive), which become actual paths. Derivation from Maxwell's equations Another way to derive Snell's Law involves an application of the general boundary conditions of Maxwell equations for electromagnetic radiation and induction. Derivation from conservation of energy and momentum Yet another way to derive Snell's law is based on translation symmetry considerations. For example, a homogeneous surface perpendicular to the z direction cannot change the transverse momentum. Since the propagation vector is proportional to the photon's momentum, the transverse propagation direction must remain the same in both regions. Assume without loss of generality a plane of incidence in the $zx$ plane. Using the well known dependence of the wavenumber on the refractive index of the medium, we derive Snell's law immediately:
$$k_{x,\text{region 1}} = k_{x,\text{region 2}} \;\Longrightarrow\; n_1 k_0 \sin\theta_1 = n_2 k_0 \sin\theta_2 \;\Longrightarrow\; n_1 \sin\theta_1 = n_2 \sin\theta_2,$$
where $k_0 = 2\pi/\lambda_0 = \omega/c$ is the wavenumber in vacuum. Although no surface is truly homogeneous at the atomic scale, full translational symmetry is an excellent approximation whenever the region is homogeneous on the scale of the light wavelength. Vector form Given a normalized light vector (pointing from the light source toward the surface) and a normalized plane normal vector , one can work out the normalized reflected and refracted rays, via the cosines of the angle of incidence and angle of refraction , without explicitly using the sine values or any trigonometric functions or angles: Note: must be positive, which it will be if is the normal vector that points from the surface toward the side where the light is coming from, the region with index . If is negative, then points to the side without the light, so start over with replaced by its negative. This reflected direction vector points back toward the side of the surface where the light came from. Now apply Snell's law to the ratio of sines to derive the formula for the refracted ray's direction vector: The formula may appear simpler in terms of renamed simple values and , avoiding any appearance of trig function names or angle names: Example: The cosine values may be saved and used in the Fresnel equations for working out the intensity of the resulting rays. Total internal reflection is indicated by a negative radicand in the equation for , which can only happen for rays crossing into a less-dense medium (). Total internal reflection and critical angle When light travels from a medium with a higher refractive index to one with a lower refractive index, Snell's law seems to require in some cases (whenever the angle of incidence is large enough) that the sine of the angle of refraction be greater than one. This of course is impossible, and the light in such cases is completely reflected by the boundary, a phenomenon known as total internal reflection. 
The largest possible angle of incidence which still results in a refracted ray is called the critical angle; in this case the refracted ray travels along the boundary between the two media. For example, consider a ray of light moving from water to air with an angle of incidence of 50°. The refractive indices of water and air are approximately 1.333 and 1, respectively, so Snell's law gives us the relation which is impossible to satisfy. The critical angle θcrit is the value of θ1 for which θ2 equals 90°: Dispersion In many wave-propagation media, wave velocity changes with frequency or wavelength of the waves; this is true of light propagation in most transparent substances other than a vacuum. These media are called dispersive. The result is that the angles determined by Snell's law also depend on frequency or wavelength, so that a ray of mixed wavelengths, such as white light, will spread or disperse. Such dispersion of light in glass or water underlies the origin of rainbows and other optical phenomena, in which different wavelengths appear as different colors. In optical instruments, dispersion leads to chromatic aberration; a color-dependent blurring that sometimes is the resolution-limiting effect. This was especially true in refracting telescopes, before the invention of achromatic objective lenses. Lossy, absorbing, or conducting media In a conducting medium, permittivity and index of refraction are complex-valued. Consequently, so are the angle of refraction and the wave-vector. This implies that, while the surfaces of constant real phase are planes whose normals make an angle equal to the angle of refraction with the interface normal, the surfaces of constant amplitude, in contrast, are planes parallel to the interface itself. Since these two planes do not in general coincide with each other, the wave is said to be inhomogeneous. The refracted wave is exponentially attenuated, with exponent proportional to the imaginary component of the index of refraction.
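The vector form and the critical-angle condition described above can be checked numerically. The following sketch assumes the usual conventions for the vector form (a unit incident direction pointing toward the surface and a unit surface normal pointing back toward the incident side) and uses the approximate indices for water and air quoted in the text; the 50° water-to-air case reproduces the total internal reflection noted above.

```python
import math

def refract(l, n, n1, n2):
    """Return the refracted unit direction, or None on total internal reflection.

    l: normalized incident direction (pointing from the source toward the surface)
    n: normalized surface normal (pointing back toward the side the light comes from)
    """
    r = n1 / n2
    cos_i = -(n[0] * l[0] + n[1] * l[1] + n[2] * l[2])   # cos(theta_1) = -n . l
    radicand = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if radicand < 0.0:                                    # negative radicand => TIR
        return None
    k = r * cos_i - math.sqrt(radicand)                   # r*cos(theta_1) - cos(theta_2)
    return tuple(r * l[i] + k * n[i] for i in range(3))

def critical_angle_deg(n1, n2):
    """Critical angle for light passing from index n1 into a rarer medium n2."""
    return math.degrees(math.asin(n2 / n1))

if __name__ == "__main__":
    n_water, n_air = 1.333, 1.0
    theta = math.radians(50.0)                            # 50 degree angle of incidence
    l = (math.sin(theta), -math.cos(theta), 0.0)          # heading down toward the surface
    n = (0.0, 1.0, 0.0)                                   # normal pointing back into the water
    print(refract(l, n, n_water, n_air))                  # None: total internal reflection
    print(f"critical angle water->air: {critical_angle_deg(n_water, n_air):.1f} deg")  # ~48.6
```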
Physical sciences
Optics
Physics
1490711
https://en.wikipedia.org/wiki/Opah
Opah
Opahs, also commonly known as moonfish, sunfish, cowfish (not to be confused with Molidae), kingfish, and redfin ocean pan are large, colorful, deep-bodied pelagic lampriform fishes comprising the small family Lampridae (also spelled Lamprididae). The family comprises two genera: Lampris () and the monotypic Megalampris (known only from fossil remains). The extinct family, Turkmenidae, from the Paleogene of Central Asia, is closely related, though much smaller. In 2015, Lampris guttatus was discovered to have near-whole-body endothermy in which the entire core of the body is maintained at around 5 °C above the surrounding water. This is unique among fish as most fish are entirely cold blooded or are capable of warming only some parts of their bodies. Species Two living species were traditionally recognized, but a taxonomic review in 2018 found that more should be recognized (the result of splitting L. guttatus into several species, each with a more restricted geographic range), bringing the total to six. The six species of Lampris have mostly non-overlapping geographical ranges, and can be recognized based on body shape and coloration pattern. Lampris australensis Underkoffler, Luers, Hyde & Craig, 2018 Southern spotted opah – Southern hemisphere, in the Pacific and Indian oceans. Lampris guttatus (Brünnich, 1788) North Atlantic opah – formerly thought to be cosmopolitan, but now thought to be restricted to the northeastern Atlantic including the Mediterranean Sea. Lampris immaculatus Gilchrist, 1904 southern opah – confined to the Southern Ocean from 34° S to the Antarctic Polar Front. Lampris incognitus Underkoffler, Luers, Hyde & Craig, 2018 smalleye Pacific opah – central and eastern North Pacific Ocean. Lampris lauta Lowe, 1860 East Atlantic opah – eastern Atlantic Ocean, including the Mediterranean, Azores and Canary Islands. Lampris megalopsis Underkoffler, Luers, Hyde & Craig, 2018 bigeye Pacific opah – cosmopolitan, including the Gulf of Mexico, Indian Ocean, the western Pacific Ocean and Chile. Extinct species † Lampris zatima, also known as "Diatomœca zatima", is a very small, extinct species from the late Miocene of what is now Southern California known primarily from fragments, and the occasional headless specimens. † Megalampris keyesi is an extinct species estimated to be about 4 m in length. Fossil remains date back to the late Oligocene of what is now New Zealand, and it is the first fossil lampridiform found in the Southern Hemisphere. Description Opahs are deeply keeled, compressed, discoid fish with conspicuous coloration: the body is a deep red-orange grading to rosy on the belly, with white spots covering the flanks. Both the median and paired fins are a bright vermilion. The large eyes stand out, as well, ringed with golden yellow. The body is covered in minute cycloid scales and its silvery, iridescent guanine coating is easily abraded. Opahs closely resemble in shape the unrelated butterfish (family Stromateidae). Both have falcated (curved) pectoral fins and forked, emarginated (notched) caudal fins. Aside from being significantly larger than butterfish, opahs have enlarged, falcated pelvic fins with about 14 to 17 rays, which distinguish them from superficially similar carangids—positioned thoracically; adult butterfish lack pelvic fins. The pectorals of opahs are also inserted (more or less) horizontally rather than vertically. 
The anterior portion of an opah's single dorsal fin (with about 50–55 rays) is greatly elongated, also in a falcated profile similar to the pelvic fins. The anal fin (around 34 to 41 rays) is about as high and as long as the shorter portion of the dorsal fin, and both fins have corresponding grooves into which they can be depressed. The snout is pointed and the mouth small, toothless, and terminal. The lateral line forms a high arch over the pectoral fins before sweeping down to the caudal peduncle. The larger species, Lampris guttatus, may reach a total length of and a weight of . The lesser-known Lampris immaculatus reaches a recorded total length of just . Endothermy The opah is the only fish known to exhibit whole body endothermy where all the internal organs are kept at a higher temperature than the surrounding water. This feature allows opahs to maintain an active lifestyle in the cold waters they inhabit. Unlike birds and mammals, the opah is not a homeotherm despite being an endotherm: while its body temperature is raised above the surrounding water temperature, it still varies with the external temperature and is not held constant. In addition to whole body endothermy, the opah also exhibits regional endothermy by raising the temperature of its brain and eyes above that of the rest of the body. Regional endothermy also arose by convergent evolution in tuna, lamnid sharks and billfishes where the swimming muscles and cranial organs are maintained at an elevated temperature compared with the surrounding water. The large muscles powering the pectoral fins generate most of the heat in the opah. In addition to the heat they generate while moving, these muscles have special regions that can generate additional heat without contracting. The opah has a thick layer of fat that insulates its internal organs and cranium from the surrounding water. However, fat alone is insufficient to retain heat within a fish's body. The gills are the main point of heat loss in fishes as this is where blood from the entire body must continuously be brought in close contact with the surrounding water. Opahs prevent heat loss through their gills using a special structure in the gill blood vessels called the rete mirabile. The rete mirabile is a dense network of blood vessels where the warm blood flowing from the heart to the gills transfers its heat to the cold blood returning from the gills. Hence, the rete mirabile prevents warm blood from coming in contact with the cold water (and losing its heat) and also ensures that the blood returning to the internal organs is warmed up to body temperature. Within the rete, the warm and cold blood flow past each other in opposite directions through thin vessels to maximise the heat transferred. This mechanism is called a counter-current heat exchanger. In addition to the rete mirabile in its gills, the opah also has a rete in the blood supply to its brain and eyes. This helps to trap heat in the cranium and further raise its temperature above the rest of the body. While the rete mirabile in the gills is unique to the opah, the cranial rete mirabile has also evolved independently in other fishes. Unlike in billfish which have a specialised noncontractile tissue that functions as a brain heater, the opah cranium is heated by the contractions of the large eye muscles. Behavior Almost nothing is known of opah biology and ecology. 
They are presumed to live out their entire lives in the open ocean, at mesopelagic depths of 50 to 500 m, with possible forays into the bathypelagic zone. They are apparently solitary, but are known to school with tuna and other scombrids. The fish propel themselves by a lift-based labriform mode of swimming, that is, by flapping their pectoral fins. This, together with their forked caudal fins and depressible median fins, indicates they swim at constantly high speeds like tuna. Lampris guttatus are able to maintain their eyes and brain at 2 °C warmer than their bodies, a phenomenon called cranial endothermy and one they share with sharks in the family Lamnidae, billfishes, and some tunas. This may allow their eyes and brains to continue functioning during deep dives into water below 4 °C. Squid and euphausiids (krill) make up the bulk of the opah diet; small fish are also taken. Pop-up archival transmitting tagging operations have indicated that, aside from humans, large pelagic sharks, such as great white sharks and mako sharks, are primary predators of opah. The tetraphyllidean tapeworm, Pelichnibothrium speciosum, has been found in L. guttatus, which may be an intermediate or paratenic host. The planktonic opah larvae initially resemble those of certain ribbonfishes (Trachipteridae), but are distinguished by the former's lack of dorsal and pelvic fin ornamentation. The slender hatchlings later undergo a marked and rapid transformation from a slender to deep-bodied form; this transformation is complete by 10.6 mm standard length in L. guttatus. Opahs are believed to have a low population resilience.
Biology and health sciences
Acanthomorpha
Animals
1491594
https://en.wikipedia.org/wiki/Giant%20clam
Giant clam
Tridacna gigas, the giant clam, is the best-known species of the giant clam genus Tridacna. Giant clams are the largest living bivalve mollusks. Several other species of "giant clam" in the genus Tridacna are often misidentified as Tridacna gigas. These clams were known to indigenous peoples of East Asia for thousands of years and the Venetian scholar and explorer Antonio Pigafetta documented them in a journal as early as 1521. One of a number of large clam species native to the shallow coral reefs of the South Pacific and Indian oceans, they may weigh more than , measure as much as across, and have an average lifespan in the wild of more than 100 years. They also are found off the shores of the Philippines and in the South China Sea in the coral reefs of Malaysia. The giant clam lives in flat coral sand or broken coral and may be found at depths of as great as 20 m (66 ft). Its range covers the Indo-Pacific, but populations are diminishing quickly and the giant clam has become extinct in many areas where it was once common. The maxima clam has the largest geographical distribution among giant clam species; it may be found off high- or low-elevation islands, in lagoons or fringing reefs. Its rapid growth rate is likely due to its ability to cultivate algae in its body tissue. Although larval clams are planktonic, they become sessile in adulthood. The creature's mantle tissues act as a habitat for the symbiotic single-celled dinoflagellate algae (zooxanthellae) from which the adult clams get most of their nutrition. By day, the clam opens its shell and extends its mantle tissue so that the algae receive the sunlight they need to photosynthesise. This method of algal farming is under study as a model for highly efficient bioreactors. Anatomy Young T. gigas are difficult to distinguish from other species of Tridacninae. Adult T. gigas are the only giant clams unable to close their shells completely, allowing part of the brownish-yellow mantle to remain visible. Tridacna gigas has four or five vertical folds in its shell, which serves as the main characteristic differentiating it from the similar T. derasa that has six or seven vertical folds. Similar to coral matrices composed of calcium carbonate, giant clams grow their shells through the process of biomineralization, which is very sensitive to seasonal temperature. The isotopic ratio of oxygen in carbonate and the ratio between Strontium and Calcium together may be used to determine historical sea surface temperature. The mantle border itself is covered in several hundred to several thousand pinhole eyespots approximately in diameter. Each one consists of a small cavity containing a pupil-like aperture and a base of 100 or more photoreceptors sensitive to three different ranges of light, including UV, which may be unique among molluscs. These receptors allow T. gigas to partially close their shells in response to dimming of light, change in the direction of light, or the movement of an object. The optical system forms an image by sequential, local dimming of some eyes using pigment from the aperture. Largest specimens The largest known T. gigas specimen measured , and it weighed 230 kg (510 lb) dead and was estimated to be 250 kg (550 lb) alive. It was discovered around 1817 on the north western coast of Sumatra, Indonesia, and its shells are now on display in a museum in Northern Ireland. A heavier giant clam was found in 1956 off the Japanese island of Ishigaki. The shell's length was , and it weighed dead and estimated alive. 
Ecology Feeding Giant clams are filter-feeders, yet 65–70 percent of their nutritional needs are supplied by zooxanthellae. This enables giant clams to grow as large as one meter in length even in nutrient-poor coral-reef waters. The clams cultivate algae in a special circulatory system that enables them to keep a substantially higher number of symbionts per unit of volume. The mantle's edges are packed with symbiotic zooxanthellae, which presumably use carbon dioxide, phosphates, and nitrates supplied by the clam. In very small clams— dry tissue weight—filter feeding provides approximately 65% of the total carbon needed for respiration and growth; comparatively larger clams () acquire only 34% of carbon from this source. A single species of zooxanthellae may serve as a symbiont of both giant clams and nearby reef-building (hermatypic) corals. Reproduction Tridacna gigas reproduce sexually and are hermaphrodites (a single clam produces both eggs and sperm). While self-fertilization is not possible, having both sexes allows them to reproduce with any other member of the species. As with all other forms of sexual reproduction, hermaphroditism ensures that new gene combinations are passed to further generations. This flexibility in reproduction reduces the burden of finding a compatible mate, while simultaneously doubling the number of offspring produced. Since giant clams cannot move themselves, they adopt broadcast spawning, releasing sperm and eggs into the water. A transmitter substance called spawning-induced substance (SIS) helps synchronize the release of sperm and eggs to ensure fertilization. The substance is released through a syphonal outlet. Other clams can detect SIS immediately. Incoming water passes chemoreceptors situated close to the incurrent syphon, which transmit the information directly to the cerebral ganglia, a simple form of brain. Detection of SIS stimulates the giant clam to swell its mantle in the central region and to contract its adductor muscle. Each clam then fills its water chambers and closes the incurrent syphon. The shell contracts vigorously with the adductor's help, so the excurrent chamber's contents flow through the excurrent syphon. After a few contractions containing only water, eggs and sperm appear in the excurrent chamber and then pass through the excurrent syphon into the water. The eggs have a diameter of . Egg release initiates the reproductive process. An adult T. gigas can release more than 500 million eggs at a time. Spawning seems to coincide with incoming tides near the second (full), third, and fourth (new) quarters of the moon phase. Spawning contractions occur every two or three minutes, with intense spawning ranging from thirty minutes to two and a half hours. Clams that do not respond to the spawning of neighboring clams may be reproductively inactive. Development The fertilized egg floats in the sea for approximately 12 hours until eventually a larva (trochophore) hatches. It then starts to produce a calcium carbonate shell. Two days after fertilization it measures . Soon it develops a "foot," which is used to move on the ground. Larvae can also swim to search for appropriate habitat. At roughly one week of age, the clam settles on the ground, although it changes location frequently within the first few weeks. The larva does not yet have symbiotic algae, so it depends completely on plankton. Free-floating zooxanthellae are also captured as the larva filters food.
Eventually the front adductor muscle disappears and the rear muscle moves into the clam's center. Many small clams die at this stage. The clam is considered a juvenile when it reaches a length of . It is difficult to observe the growth rate of T. gigas in the wild, but laboratory-reared giant clams have been observed to grow a year. The ability of Tridacna to grow to such large sizes, with fleshy mantles that extend beyond the edges of their shells, is considered to be the result of a total reorganization of bivalve development and morphology. Historically, two evolutionary explanations have been suggested for this process. Yonge suggested, and maintained for many years, that the visceral-pedal ganglia complex rotates 180 degrees relative to the shell, requiring that the two develop and evolve independently. Stasek proposed instead that growth occurs primarily in a posterior direction rather than in the ventral direction more typical of bivalves, which is reflected in the transitional stages of growth that juveniles undergo. Human relevance The main reason that giant clams are becoming endangered is likely to be intensive exploitation by bivalve fishers. Mainly large adults are killed because they are the most profitable. The giant clam is considered a delicacy in Japan (known as himejako), France, Southeast Asia, and many Pacific Islands. Some Asian foods include the meat from the muscles of clams. Large amounts of money are paid for the adductor muscle, which Chinese people believe to have aphrodisiac powers. On the black market, giant clam shells are sold as decorative accoutrements. Legend As is often the case historically with uncharacteristically large species, the giant clam has been misunderstood. Even in countries where giant clams are easily seen, stories incorrectly depict giant clams as aggressive beings. For instance, although the clams are unable to close their shells completely, a Polynesian folk tale relates that a monkey's hand was bitten off by one, and even though the clams are sessile once past the larval stage, a Maori legend relates a supposed attack on a canoe by a giant clam. Starting in the eighteenth century, claims of danger were relayed to the Western world. In the 1920s, the reputable science magazine Popular Mechanics claimed that the great mollusc had caused deaths. Versions of the U.S. Navy Diving Manual even gave detailed instructions for releasing oneself from its grasp by severing the adductor muscles used to close its shell. In an account of the discovery of the Pearl of Lao Tzu, Wilburn Cobb said he was told that a Dyak diver had drowned when the Tridacna closed its shell on his arm. In reality, the slow speed of their adductor muscle contraction and the need to force water out of their shells while closing prevent them from trapping a human. Other myths associate the huge size of giant clams with great age. While giant clams do live a long time and may serve as a proxy record of historic climatic conditions, their large size is more likely associated with rapid growth. Aquaculture Mass culture of giant clams began at the Micronesian Mariculture Demonstration Center in Palau (Belau). A large Australian government-funded project from 1985 to 1992 mass-cultured giant clams, particularly T. gigas, at James Cook University's Orpheus Island Research Station, and supported the development of hatcheries in the Pacific Islands and the Philippines.
Seven of the ten known species of giant clams in the world are found in the coral reefs of the South China Sea. Conservation status There is concern among conservationists about whether those who use the species as a source of livelihood are overexploiting it. The numbers in the wild have been greatly reduced by extensive harvesting for food and the aquarium trade. The species is listed in Appendix II of the Convention on International Trade in Endangered Species (CITES) meaning international trade (including in parts and derivatives) is regulated. T. gigas has been reported as locally extinct in peninsular Malaysia, while T. derasa and Hippopus porcellanus are restricted to Eastern Malaysia. These recent local extinctions have motivated the introduction of giant clams to Hawaii and Micronesia following maricultural advancements. Restocked individuals in the Philippines have successfully dispersed their own spawned larvae to at least several hundred meters away after only ten years.
Biology and health sciences
Bivalvia
Animals
1491814
https://en.wikipedia.org/wiki/Electric%20stove
Electric stove
An electric stove, electric cooker or electric range is a stove with an integrated electrical heating device to cook and bake. Electric stoves became popular as replacements for solid-fuel (wood or coal) stoves which required more labor to operate and maintain. Some modern stoves come in a unit with built-in extractor hoods. The stove's one or more "burners" (heating elements) may be controlled by a rotary switch with a finite number of positions; or may have an "infinite switch" called a simmerstat that allows constant variability between minimum and maximum heat settings. Some stove burners and controls incorporate thermostats. History Early patents On September 20, 1859, George B. Simpson was awarded US patent #25532 for an 'electro-heater' surface heated by a platinum-wire coil powered by batteries. In his words, useful to "warm rooms, boil water, cook victuals...". Canadian inventor Thomas Ahearn filed patent #39916 in 1892 for an "Electric Oven," a device he probably employed in preparing a meal for an Ottawa hotel that year. Ahearn and Warren Y. Soper were owners of Ottawa's Chaudiere Electric Light and Power Company. The electric stove was showcased at the Chicago World's Fair in 1893, where an electrified model kitchen was shown. Unlike the gas stove, the electrical stove was slow to catch on, partly due to the unfamiliar technology, and the need for cities and towns to be electrified. In 1897, William Hadaway was granted US patent # 574537 for an "Automatically Controlled Electric Oven". Kalgoorlie Stove In November 1905, David Curle Smith, the Municipal Electrical Engineer of Kalgoorlie, Western Australia, applied for a patent (Aust Patent No 4699/05) for a device that adopted (following the design of gas stoves) what later became the configuration for most electric stoves: an oven surmounted by a hotplate with a grill tray between them. Curle Smith's stove did not have a thermostat; heat was controlled by the number of the appliance's nine elements that were switched on. After the patent was granted in 1906, manufacturing of Curle Smith's design commenced in October that year. The entire production run was acquired by the electricity supply department of Kalgoorlie Municipality, which hired out the stoves to residents. About 50 appliances were produced before cost overruns became a factor in Council politics and the project was suspended. This was the first time household electric stoves were produced with the express purpose of bringing "cooking by electricity ... within the reach of anyone". There are no extant examples of this stove, many of which were salvaged for their copper content during World War I. To promote the stove, David Curle Smith's wife, H. Nora Curle Smith (née Helen Nora Murdoch, and a member of the Murdoch family prominent in Australian public life), wrote a cookbook containing operating instructions and 161 recipes. Thermo-Electrical Cooking Made Easy, published in March 1907, is therefore the world's first cookbook for electric stoves. Since 1908 Three companies, in the United States, began selling electric stoves in 1908. However, sales and public acceptance were slow to develop. Early electric stoves were unsatisfactory due to the cost of electricity (compared with wood, coal, or city gas), limited power available from the electrical supply company, poor temperature regulation, and short life of heating elements. The invention of nichrome alloy for resistance wires improved the cost and durability of heating elements. 
As late as the 1920s, an electric stove was still considered a novelty. By the 1930s, the maturing of the technology, the decreased cost of electric power, and the modernized styling of electric stoves had greatly increased their acceptance. The electric stove slowly began to replace the gas stove, especially in household kitchens. Electric stoves and other household appliances were marketed by electrical utilities to build demand for electric power. During the expansion of rural electrification, demonstrations of cooking on an electric stove were popular. Variants Early electric stoves had resistive heating coils which heated iron hotplates, on top of which the pots were placed. Eventually, composite heating elements were introduced, with the resistive wires encased in hollow metal tubes packed with magnesite. These tubes, arranged in a spiral, support the cookware directly. In the 1970s, glass-ceramic cooktops started to appear. Glass-ceramic has very low thermal conductivity and a near-zero coefficient of thermal expansion, but lets infrared radiation pass very well. Electrical heating coils or halogen lamps are used as heating elements. Because of these physical characteristics, the heat reaches the cookware largely as radiation while the surrounding cooktop surface stays comparatively cool. A third technology is the induction stove, which also has a smooth glass-ceramic surface. Only ferromagnetic cookware works with induction stoves, which heat the cookware by means of electromagnetic induction. Electricity consumption Typical electricity consumption of one heating element, depending on size, is 1–3 kW.
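As a rough illustration of the consumption figure above, the energy used and the running cost of a single element follow directly from its power rating. The element power, cooking time, and electricity price in the sketch below are assumed example values, not measured data.

```python
# Back-of-the-envelope estimate for one heating element. The power is taken
# from the 1-3 kW range quoted above; the cooking time and tariff are assumed
# example values.

power_kw = 2.0        # mid-range element
hours = 0.75          # assumed 45 minutes of use
price_per_kwh = 0.30  # hypothetical electricity tariff (currency units per kWh)

energy_kwh = power_kw * hours          # 1.5 kWh
cost = energy_kwh * price_per_kwh      # 0.45 currency units
print(f"Energy used: {energy_kwh:.2f} kWh, approximate cost: {cost:.2f}")
```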
Technology
Household appliances
null
1493798
https://en.wikipedia.org/wiki/Eastern%20mole
Eastern mole
The eastern mole or common mole (Scalopus aquaticus) is a medium-sized North American mole. It is the only species in the genus Scalopus. It is found in forested and open areas with moist sandy soils in northern Mexico, the eastern United States and the southwestern corner of Ontario in Canada. The eastern mole has grey-brown fur with silver-grey underparts, a pointed nose and a short tail. It is about in length including a long tail and weighs about . Its front paws are broad and spade-shaped, specialized for digging. It has 36 teeth. Its eyes are covered by fur and its ears are not visible. The eastern mole spends most of its time underground, foraging in shallow burrows for earthworms, grubs, beetles, insect larvae and some plant matter. It is active year-round. It is mainly solitary except during mating in early spring. The female has a litter of two to five young in a deep burrow. Subspecies A majority of the moles throughout their range are Scalopus aquaticus aquaticus. All the other subspecies exist in small pocket ranges.
Biology and health sciences
Eulipotyphla
Animals
1494235
https://en.wikipedia.org/wiki/Glacial%20landform
Glacial landform
Glacial landforms are landforms created by the action of glaciers. Most of today's glacial landforms were created by the movement of large ice sheets during the Quaternary glaciations. Some areas, like Fennoscandia and the southern Andes, have extensive occurrences of glacial landforms; other areas, such as the Sahara, display rare and very old fossil glacial landforms. Erosional landforms As glaciers expand under the accumulating weight of snow and ice, they crush, abrade, and scour rock surfaces and bedrock. The resulting erosional landforms include striations, cirques, glacial horns, arêtes, trim lines, U-shaped valleys, roches moutonnées, overdeepenings and hanging valleys. Striations: grooves and indentations in rock outcrops, formed by the scraping of small sediments on the bottom of a glacier across the Earth's surface. The orientation of striations indicates the direction in which the glacier was moving. Cirque: the starting location of a mountain glacier, left behind as a bowl-shaped indentation in the mountainside once the small glacier has melted. Cirque stairway: a sequence of cirques. U-shaped, or trough, valley: U-shaped valleys are created by mountain glaciers. When filled with ocean water so as to create an inlet, these valleys are called fjords. Arête: spiky high land between two glaciers. If the glacial action erodes through, a spillway (or col) forms. Horn: a sharp peak formed where multiple arêtes, carved by adjacent glaciers, intersect. Valley step: an abrupt change in the longitudinal slope of a glacial valley. Hanging valley: a tributary valley eroded less deeply than the main glacial valley, leaving its floor perched above the main valley floor, often marked by a waterfall. Roche moutonnée. Nunatak. Depositional landforms Later, when the glaciers retreated, leaving behind their freight of crushed rock and sand (glacial drift), they created characteristic depositional landforms. Depositional landforms are often made of glacial till, which is composed of unsorted sediments (some quite large, others small) that were eroded, carried, and deposited by the glacier some distance away from their original rock source. Examples include glacial moraines, eskers, and kames. Drumlins and ribbed moraines are also landforms left behind by retreating glaciers. Many depositional landforms result from sediment deposited or reshaped by meltwater and are referred to as fluvioglacial landforms. Fluvioglacial deposits differ from glacial till in that they were deposited by water rather than by the glacier itself, and the sediments are thus also more size-sorted than glacial till is. The stone walls of New England contain many glacial erratics, rocks that were dragged by a glacier many miles from their bedrock origin. Esker: the built-up bed of a subglacial stream, forming small, string-like mounds left behind as a glacier retreats. Kame: an irregularly shaped mound of sediments deposited by meltwater falling into an opening in the glacial ice. Moraine: a built-up mound of glacial till deposited along the margins or end of a glacier. The feature can be terminal (at the end of a glacier, showing how far the glacier extended), lateral (along the sides of a glacier), or medial (formed by the merger of lateral moraines from contributory glaciers). Types: Pulju, Rogen, Sevetti, terminal, Veiki. Outwash fan: a fan of sediments deposited by braided streams flowing from the front of a glacier onto a flatter, lower-elevation plain. Glacial lakes and ponds Lakes and ponds may also be caused by glacial movement.
Kettle lakes form when a retreating glacier leaves behind an underground or surface chunk of ice that later melts to form a depression containing water. Moraine-dammed lakes occur when glacial debris dam a stream (or snow runoff). Jackson Lake and Jenny Lake in Grand Teton National Park are examples of moraine-dammed lakes, though Jackson Lake is enhanced by a man-made dam. Kettle lake: Depression, formed by a block of ice separated from the main glacier, in which the lake forms Tarn: A lake formed in a cirque by overdeepening Paternoster lake: A series of lakes in a glacial valley, formed when a stream is dammed by successive recessional moraines left by an advancing or retreating glacier Glacial lake: A lake that formed between the front of a glacier and the last recessional moraine Ice features Apart from the landforms left behind by glaciers, glaciers themselves are striking features of the terrain, particularly in the polar regions of Earth. Notable examples include valley glaciers where glacial flow is restricted by the valley walls, crevasses in the upper section of glacial ice, and icefalls—the ice equivalent of waterfalls. Disputed origin The glacial origin of some landforms has been questioned: Erling Lindström has advanced the thesis that roches moutonnées may not be entirely glacial landforms, and may have already had most of their shape before glaciation. Jointing that contributes to their shape typically predates glaciation, and roche moutonnée-like forms can be found in tropical areas such as East Africa and Australia. Further, at Ivö Lake in Sweden, rock surfaces exposed by kaolin mining and then weathered resemble roche moutonnée. The idea of elevated flat surfaces being shaped by glaciation—the glacial buzzsaw effect—has been rejected by various scholars. In the case of Norway the elevated paleic surface has been proposed to have been shaped by the glacial buzzsaw effect. However, this proposal is difficult to reconcile with the fact that the paleic surfaces consist of a series of steps at different levels. Glacial cirques, that in the buzzsaw hypothesis contribute to leveling the landscape, are not associated with any paleosurface levels of the composite paleic surface, nor does the modern equilibrium line altitude (ELA) or the Last Glacial Maximum ELA match any given level of the paleic surface. The elevated plains of West Greenland are also unrelated to any glacial buzzsaw effect. The Gulf of Bothnia and Hudson Bay, two large depressions at the centre of former ice sheets, are known to be more the result of tectonics than of any weak glacial erosion.
Physical sciences
Glacial landforms
null
1494648
https://en.wikipedia.org/wiki/Google%20Maps
Google Maps
Google Maps is a web mapping platform and consumer application offered by Google. It offers satellite imagery, aerial photography, street maps, 360° interactive panoramic views of streets (Street View), real-time traffic conditions, and route planning for traveling by foot, car, bike, air (in beta) and public transportation. , Google Maps was being used by over one billion people every month around the world. Google Maps began as a C++ desktop program developed by brothers Lars and Jens Rasmussen in Australia at Where 2 Technologies. In October 2004, the company was acquired by Google, which converted it into a web application. After additional acquisitions of a geospatial data visualization company and a real-time traffic analyzer, Google Maps was launched in February 2005. The service's front end utilizes JavaScript, XML, and Ajax. Google Maps offers an API that allows maps to be embedded on third-party websites, and offers a locator for businesses and other organizations in numerous countries around the world. Google Map Maker allowed users to collaboratively expand and update the service's mapping worldwide but was discontinued from March 2017. However, crowdsourced contributions to Google Maps were not discontinued as the company announced those features would be transferred to the Google Local Guides program, although users that are not Local Guides can still contribute. Google Maps' satellite view is a "top-down" or bird's-eye view; most of the high-resolution imagery of cities is aerial photography taken from aircraft flying at , while most other imagery is from satellites. Much of the available satellite imagery is no more than three years old and is updated on a regular basis, according to a 2011 report. Google Maps previously used a variant of the Mercator projection, and therefore could not accurately show areas around the poles. In August 2018, the desktop version of Google Maps was updated to show a 3D globe. It is still possible to switch back to the 2D map in the settings. Google Maps for mobile devices were first released in 2006; the latest versions feature GPS turn-by-turn navigation along with dedicated parking assistance features. By 2013, it was found to be the world's most popular smartphone app, with over 54% of global smartphone owners using it. In 2017, the app was reported to have two billion users on Android, along with several other Google services including YouTube, Chrome, Gmail, Search, and Google Play. History Acquisitions Google Maps first started as a C++ program designed by two Danish brothers, Lars and Jens Eilstrup Rasmussen, and Noel Gordon and Stephen Ma, at the Sydney-based company Where 2 Technologies, which was founded in early 2003. The program was initially designed to be separately downloaded by users, but the company later pitched the idea for a purely Web-based product to Google management, changing the method of distribution. In October 2004, the company was acquired by Google Inc. where it transformed into the web application Google Maps. The Rasmussen brothers, Gordon and Ma joined Google at that time. In the same month, Google acquired Keyhole, a geospatial data visualization company (with investment from the CIA), whose marquee application suite, Earth Viewer, emerged as the Google Earth application in 2005 while other aspects of its core technology were integrated into Google Maps. In September 2004, Google acquired ZipDash, a company that provided real-time traffic analysis. 
2005–2010 The launch of Google Maps was first announced on the Google Blog on February 8, 2005. In September 2005, in the aftermath of Hurricane Katrina, Google Maps quickly updated its satellite imagery of New Orleans to allow users to view the extent of the flooding in various parts of that city. As of 2007, Google Maps was equipped with a miniature view with a draggable rectangle that denotes the area shown in the main viewport, and "Info windows" for previewing details about locations on maps. As of 2024, this feature had been removed (likely several years prior). On November 28, 2007, Google Maps for Mobile 2.0 was released. It featured a beta version of a "My Location" feature, which uses the GPS / Assisted GPS location of the mobile device, if available, supplemented by determining the nearest wireless networks and cell sites. The software looks up the location of the cell site using a database of known wireless networks and sites. By triangulating the different signal strengths from cell transmitters and then using their location property (retrieved from the database), My Location determines the user's current location. On September 23, 2008, coinciding with the announcement of the first commercial Android device, Google announced that a Google Maps app had been released for its Android operating system. In October 2009, Google replaced Tele Atlas as their primary supplier of geospatial data in the US version of Maps and used their own data. 2011–2015 On April 19, 2011, Map Maker was added to the American version of Google Maps, allowing any viewer to edit and add changes to Google Maps. This provides Google with local map updates almost in real-time instead of waiting for digital map data companies to release more infrequent updates. On January 31, 2012, Google, due to offering its Maps for free, was found guilty of abusing the dominant position of its Google Maps application and ordered by a court to pay a fine and damages to Bottin Cartographer, a French mapping company. This ruling was overturned on appeal. In June 2012, Google started mapping the UK's rivers and canals in partnership with the Canal and River Trust. The company has stated that "it would update the program during the year to allow users to plan trips which include locks, bridges and towpaths along the 2,000 miles of river paths in the UK." In December 2012, the Google Maps application was separately made available in the App Store, after Apple removed it from its default installation of the mobile operating system version iOS 6 in September 2012. On January 29, 2013, Google Maps was updated to include a map of North Korea. , Google Maps recognizes Palestine as a country, instead of redirecting to the Palestinian territories. In August 2013, Google Maps removed the Wikipedia Layer, which provided links to Wikipedia content about locations shown in Google Maps using Wikipedia geocodes. On April 12, 2014, Google Maps was updated to reflect the annexation of Ukrainian Crimea by Russia. Crimea is shown as the Republic of Crimea in Russia and as the Autonomous Republic of Crimea in Ukraine. All other versions show a dotted disputed border. In April 2015, on a map near the Pakistani city of Rawalpindi, the imagery of the Android logo urinating on the Apple logo was added via Map Maker and appeared on Google Maps. The vandalism was soon removed and Google publicly apologized. 
However, as a result, Google disabled user moderation on Map Maker, and on May 12, disabled editing worldwide until it could devise a new policy for approving edits and avoiding vandalism. On April 29, 2015, users of the classic Google Maps were forwarded to the new Google Maps with the option to be removed from the interface. On July 14, 2015, the Chinese name for Scarborough Shoal was removed after a petition from the Philippines was posted on Change.org. 2016–2018 On June 27, 2016, Google rolled out new satellite imagery worldwide sourced from Landsat 8, comprising over 700 trillion pixels of new data. In September 2016, Google Maps acquired mapping analytics startup Urban Engines. In 2016, the Government of South Korea offered Google conditional access to the country's geographic database – access that already allows indigenous Korean mapping providers high-detail maps. Google declined the offer, as it was unwilling to accept restrictions on reducing the quality around locations the South Korean Government felt were sensitive (see restrictions on geographic data in South Korea). On October 16, 2017, Google Maps was updated with accessible imagery of several planets and moons such as Titan, Mercury, and Venus, as well as direct access to imagery of the Moon and Mars. In May 2018, Google announced major changes to the API structure starting June 11, 2018. This change consolidated the 18 different endpoints into three services and merged the basic and premium plans into one pay-as-you-go plan. This meant a 1400% price raise for users on the basic plan, with only six weeks of notice. This caused a harsh reaction within the developers community. In June, Google postponed the change date to July 16, 2018. In August 2018, Google Maps designed its overall view (when zoomed out completely) into a 3D globe dropping the Mercator projection that projected the planet onto a flat surface. 2019–present In January 2019, Google Maps added speed trap and speed camera alerts as reported by other users. On October 17, 2019, Google Maps was updated to include incident reporting, resembling a functionality in Waze which was acquired by Google in 2013. In December 2019, Incognito mode was added, allowing users to enter destinations without saving entries to their Google accounts. In February 2020, Maps received a 15th anniversary redesign. It notably added a brand-new app icon, which now resembles the original icon in 2005. On September 23, 2020, Google announced a COVID-19 Layer update for Google maps, which is designed to offer a seven-day average data of the total COVID-19-positive cases per 100,000 people in the area selected on the map. It also features a label indicating the rise and fall in the number of cases. In January 2021, Google announced that it would be launching a new feature displaying COVID-19 vaccination sites. In January 2021, Google announced updates to the route planner that would accommodate drivers of electric vehicles. Routing would take into account the type of vehicle, vehicle status including current charge, and the locations of charging stations. In June 2022, Google Maps added a layer displaying air quality for certain countries. In September 2022, Google removed the COVID-19 Layer from Google Maps due to lack of usage of the feature. Functionality Directions and transit Google Maps provides a route planner, allowing users to find available directions through driving, public transportation, walking, or biking. 
Google has partnered globally with over 800 public transportation providers to adopt GTFS (General Transit Feed Specification), making the data available to third parties. The app can indicate users' transit route, thanks to an October 2019 update. The incognito mode and eyes-free walking navigation features were released earlier. A July 2020 update provided bike share routes. In February 2024, Google Maps started rolling out glanceable directions for its Android and iOS apps. The feature allows users to track their journey from their device's lock screen. Traffic conditions In 2007, Google began offering traffic data as a colored overlay on top of roads and motorways to represent the speed of vehicles on particular roads. Crowdsourcing is used to obtain the GPS-determined locations of a large number of cellphone users, from which live traffic maps are produced. Google has stated that the speed and location information it collects to calculate traffic conditions is anonymous. Options available in each phone's settings allow users not to share information about their location with Google Maps. Google stated, "Once you disable or opt out of My Location, Maps will not continue to send radio information back to Google servers to determine your handset's approximate location". Street View On May 25, 2007, Google released Google Street View, a feature of Google Maps providing 360° panoramic street-level views of various locations. On the date of release, the feature only included five cities in the U.S. It has since expanded to thousands of locations around the world. In July 2009, Google began mapping college campuses and surrounding paths and trails. Street View garnered much controversy after its release because of privacy concerns about the uncensored nature of the panoramic photographs, although the views are only taken on public streets. Since then, Google has blurred faces and license plates through automated facial recognition. In late 2014, Google launched Google Underwater Street View, including the Australian Great Barrier Reef in 3D. The images are taken by special cameras which turn 360 degrees and take shots every 3 seconds. In 2017, in both Google Maps and Google Earth, Street View navigation of the International Space Station interior spaces became available. 3D imagery Google Maps has incorporated 3D models of hundreds of cities in over 40 countries from Google Earth into its satellite view. The models were developed using aerial photogrammetry techniques. Immersive View At the I/O 2022 event, Google announced Immersive View, a feature of Google Maps which would involve composite 3D images generated from Street View and aerial images of locations using AI, complete with synchronous information. It was initially to be available in five cities worldwide, with plans to add it to other cities later on. The feature was previewed in September 2022 with 250 photorealistic aerial 3D images of landmarks, and was fully launched in February 2023. An expansion of Immersive View to routes was announced at Google I/O 2023, and was launched in October 2023 for 15 cities globally. The feature uses predictive modelling and neural radiance fields to scan Street View and aerial images to generate composite 3D imagery of locations, including both exteriors and interiors, and routes, including driving, walking or cycling, as well as to generate synchronous information and forecasts up to a month ahead from historical and environmental data about both, such as weather, traffic and busyness.
Immersive View has been available in the following locations: Landmark Icons Google added icons of city attractions, in a similar style to Apple Maps, on October 3, 2019. In the first stage, such icons were added to 9 cities. 45° imagery In December 2009, Google introduced a new view consisting of 45° angle aerial imagery, offering a "bird's-eye view" of cities. The first cities available were San Jose and San Diego. This feature was initially available only to developers via the Google Maps API. In February 2010, it was introduced as an experimental feature in Google Maps Labs. In July 2010, 45° imagery was made available in Google Maps in select cities in South Africa, the United States, Germany and Italy. Weather In February 2024, Google Maps incorporated a small weather icon on the top left corner of the Android and iOS mobile apps, giving access to weather and air quality index details. Lens in Maps Previously called Search with Live View, Lens In Maps identifies shops, restaurants, transit stations and other street features with a phone's camera and places relevant information and a category pin on top, like closing/opening times, current busyness, pricing and reviews using AI and augmented reality. The feature, if available on the device, can be accessed through tapping the Lens icon in the search bar. It was expanded to 50 new cities in October 2023 in its biggest expansion yet, after initially being released in late 2022 in Los Angeles, San Francisco, New York, London, and Paris. Lens in Maps shares features with Live View, which also displays information relating to street features while guiding a user to a selected destination with virtual arrows, signs and guidance. Business listings Google collates business listings from multiple on-line and off-line sources. To reduce duplication in the index, Google's algorithm combines listings automatically based on address, phone number, or geocode, but sometimes information for separate businesses will be inadvertently merged with each other, resulting in listings inaccurately incorporating elements from multiple businesses. Google allows business owners to create and verify their own business data through Google Business Profile (GBP), formerly Google My Business (GMB). Owners are encouraged to provide Google with business information including address, phone number, business category, and photos. Google has staff in India who check and correct listings remotely as well as support businesses with issues. Google also has teams on the ground in most countries that validate physical addresses in person. In May 2024, Google announced it would discontinue the chat feature in Google Business Profile. Starting July 15, 2024, new chat conversations would be disabled, and by July 31, 2024, all chat functionalities would end. Google Maps can be manipulated by businesses that are not physically located in the area in which they record a listing. There are cases of people abusing Google Maps to overtake their competition by placing unverified listings on online directory sites, knowing the information will roll across to Google (duplicate sites). The people who update these listings do not use a registered business name. They place keywords and location details on their Google Maps business title, which can overtake credible business listings. In Australia in particular, genuine companies and businesses are noticing a trend of fake business listings in a variety of industries. 
Genuine business owners can also optimize their business listings to gain greater visibility in Google Maps, through a type of search engine marketing called local search engine optimization. Indoor maps In March 2011, indoor maps were added to Google Maps, giving users the ability to navigate themselves within buildings such as airports, museums, shopping malls, big-box stores, universities, transit stations, and other public spaces (including underground facilities). Google encourages owners of public facilities to submit floor plans of their buildings in order to add them to the service. Map users can view different floors of a building or subway station by clicking on a level selector that is displayed near any structures which are mapped on multiple levels. My Maps My Maps is a feature in Google Maps launched in April 2007 that enables users to create custom maps for personal use or sharing. Users can add points, lines, shapes, notes and images on top of Google Maps using a WYSIWYG editor. An Android app for My Maps, initially released in March 2013 under the name Google Maps Engine Lite, was available until its removal from the Play Store in October 2021. Google Local Guides Google Local Guides is a volunteer program launched by Google Maps to enable users to contribute to Google Maps when registered. It sometimes provides them additional perks and benefits for their collaboration. Users can achieve Level 1 to 10, and be awarded with badges. The program is partially a successor to Google Map Maker as features from the former program became integrated into the website and app. The program consists of adding reviews, photos, basic information, and videos; and correcting information such as wheelchair accessibility. Adding reviews, photos, videos, new places, new roads or providing useful information gives points to the users. The level of users is upgraded when they get a certain amount of points. Starting with Level 4, a star is shown near the avatar of the user. Timelapse Earth Timelapse, released in April 2021, is a program in which users can see how the earth has been changed in the last 37 years. They combined the 15 million satellite images (roughly ten quadrillion pixels) to create the 35 global cloud-free Images for this program. Timeline If a user shares their location with Google, Timeline summarises this location for each day on a Timeline map. Timeline estimates the mode of travel used to move between places and will also show photos taken at that location. In June 2024, Google started progressively removing access to the timeline on web browsers, with the information instead being stored on a local device. Implementation As the user drags the map, the grid squares are downloaded from the server and inserted into the page. When a user searches for a business, the results are downloaded in the background for insertion into the side panel and map; the page is not reloaded. A hidden iframe with form submission is used because it preserves browser history. Like many other Google web applications, Google Maps uses JavaScript extensively. The site also uses protocol buffers for data transfer rather than JSON, for performance reasons. The version of Google Street View for classic Google Maps required Adobe Flash. In October 2011, Google announced MapsGL, a WebGL version of Maps with better renderings and smoother transitions. Indoor maps use JPG, .PNG, .PDF, .BMP, or .GIF, for floor plans. 
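The "grid squares" described at the start of the Implementation section correspond to the tiling scheme used by typical web "slippy maps": at zoom level z the Web Mercator world is divided into a 2^z by 2^z grid of tiles, and each latitude/longitude pair maps to a tile coordinate. The sketch below shows that standard conversion; it illustrates the general convention rather than Google's internal implementation.

```python
# Standard "slippy map" tile addressing: at zoom z the world is a 2**z-by-2**z
# grid of Web Mercator tiles, and a latitude/longitude maps to tile indices
# (x, y). This is the common web-mapping convention, not Google's internal code.
import math

def lat_lon_to_tile(lat_deg, lon_deg, zoom):
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

# Example: the tile containing central Sydney at zoom level 12.
print(lat_lon_to_tile(-33.8688, 151.2093, 12))
```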
Users who are logged into a Google Account can save locations so that they are overlaid on the map with various colored "pins" whenever they browse the application. These "Saved places" can be organized into default groups or user named groups and shared with other users. "Starred places" is one default group example. It previously automatically created a record within the now-discontinued product Google Bookmarks. Map data and imagery The Google Maps terms and conditions state that usage of material from Google Maps is regulated by Google Terms of Service and some additional restrictions. Google has either purchased local map data from established companies, or has entered into lease agreements to use copyrighted map data. The owner of the copyright is listed at the bottom of zoomed maps. For example, street maps in Japan are leased from Zenrin. Street maps in China are leased from AutoNavi. Russian street maps are leased from Geocentre Consulting and Tele Atlas. Data for North Korea is sourced from the companion project Google Map Maker. Street map overlays, in some areas, may not match up precisely with the corresponding satellite images. The street data may be entirely erroneous, or simply out of date: "The biggest challenge is the currency of data, the authenticity of data," said Google Earth representative Brian McClendon. As a result, in March 2008 Google added a feature to edit the locations of houses and businesses. Restrictions have been placed on Google Maps through the apparent censoring of locations deemed potential security threats. In some cases the area of redaction is for specific buildings, but in other cases, such as Washington, D.C., the restriction is to use outdated imagery. Google Maps API Google Maps API, now called Google Maps Platform, hosts about 17 different APIs, which are themed under the following categories: Maps, Places and Routes. After the success of reverse-engineered mashups such as chicagocrime.org and housingmaps.com, Google launched the Google Maps API in June 2005 to allow developers to integrate Google Maps into their websites. It was a free service that did not require an API key until June 2018 (changes went into effect on July 16), when it was announced that an API key linked to a Google Cloud account with billing enabled would be required to access the API. The API does not contain ads, but Google states in their terms of use that they reserve the right to display ads in the future. By using the Google Maps API, it is possible to embed Google Maps into an external website, onto which site-specific data can be overlaid. Although initially only a JavaScript API, the Maps API was expanded to include an API for Adobe Flash applications (but this has been deprecated), a service for retrieving static map images, and web services for performing geocoding, generating driving directions, and obtaining elevation profiles. Over 1,000,000 web sites use the Google Maps API, making it the most heavily used web application development API. In September 2011, Google announced it would deprecate the Google Maps API for Flash. The Google Maps API was free for commercial use, provided that the site on which it is being used is publicly accessible and did not charge for access, and was not generating more than 25,000 map accesses a day. Sites that did not meet these requirements could purchase the Google Maps API for Business. As of June 21, 2018, Google increased the prices of the Maps API and requires a billing profile. 
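As an example of the web services mentioned above, the sketch below builds a request to the geocoding endpoint. The URL pattern and parameters follow Google's documented scheme, but the current Maps Platform documentation should be checked before relying on them; YOUR_API_KEY is a placeholder, and, as noted above, a billing-enabled API key is required.

```python
# Minimal sketch of a Geocoding web-service call. The endpoint and parameters
# follow Google's documented pattern; verify against the current Maps Platform
# documentation. "YOUR_API_KEY" is a placeholder for a billing-enabled key.
import json
import urllib.parse
import urllib.request

def geocode(address, api_key):
    params = urllib.parse.urlencode({"address": address, "key": api_key})
    url = f"https://maps.googleapis.com/maps/api/geocode/json?{params}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    # Each result includes a formatted address and a latitude/longitude pair.
    return [(r["formatted_address"], r["geometry"]["location"])
            for r in data.get("results", [])]

# Example (requires a valid key):
# print(geocode("1600 Amphitheatre Parkway, Mountain View, CA", "YOUR_API_KEY"))
```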
Google Maps in China Due to restrictions on geographic data in China, Google Maps must partner with a Chinese digital map provider in order to legally show Chinese map data. Since 2006, this partner has been AutoNavi. Within China, the State Council mandates that all maps of China use the GCJ-02 coordinate system, which is offset from the WGS-84 system used in most of the world. google.cn/maps (formerly Google Ditu) uses the GCJ-02 system for both its street maps and satellite imagery. google.com/maps also uses GCJ-02 data for the street map, but uses WGS-84 coordinates for satellite imagery, causing the so-called China GPS shift problem. Frontier alignments also present some differences between google.cn/maps and google.com/maps. On the latter, sections of the Chinese border with India and Pakistan are shown with dotted lines, indicating areas or frontiers in dispute. However, google.cn shows the Chinese frontier strictly according to Chinese claims with no dotted lines indicating the border with India and Pakistan. For example, the South Tibet region claimed by China but administered by India as a large part of Arunachal Pradesh is shown inside the Chinese frontier by google.cn, with Indian highways ending abruptly at the Chinese claim line. Google.cn also shows Taiwan and the South China Sea Islands as part of China. Google Ditu's street map coverage of Taiwan no longer omits major state organs, such as the Presidential Palace, the five Yuans, and the Supreme Court. Feature-wise, google.cn/maps does not feature My Maps. On the other hand, while google.cn displays virtually all text in Chinese, google.com/maps displays most text (user-selectable real text as well as those on map) in English. This behavior of displaying English text is not consistent but intermittent – sometimes it is in English, sometimes it is in Chinese. The criteria for choosing which language is displayed are not known publicly. Criticism and controversies Incorrect location naming There are cases where Google Maps had added out-of-date neighborhood monikers. Thus, in Los Angeles, the name "Brooklyn Heights" was revived from its 1870s usage and "Silver Lake Heights" from its 1920s usage, or mistakenly renamed areas (in Detroit, the neighborhood "Fiskhorn" became "Fishkorn"). Because many companies utilize Google Maps data, these previously obscure or incorrect names then gain traction; the names are often used by realtors, hotels, food delivery sites, dating sites, and news organizations. Google has said it created its maps from third-party data, public sources, satellites, and users, but many names used have not been connected to any official record. According to a former Google Maps employee (who was not authorized to speak publicly), users can submit changes to Google Maps, but some submissions are ruled upon by people with little local knowledge of a place, such as contractors in India. Critics maintain that names likes "BoCoCa" (for the area in Brooklyn between Boerum Hill, Cobble Hill and Carroll Gardens), are "just plain puzzling" or simply made up. Some names used by Google have been traced to non-professionally made maps with typographical errors that survived on Google Maps. Potential misuse In 2005 the Australian Nuclear Science and Technology Organisation (ANSTO) complained about the potential for terrorists to use the satellite images in planning attacks, with specific reference to the Lucas Heights nuclear reactor; however, the Australian Federal government did not support the organization's concern. 
At the time of the ANSTO complaint, Google had colored over some areas for security (mostly in the U.S.), such as the rooftop of the White House and several other Washington, D.C. buildings. In October 2010, Nicaraguan military commander Edén Pastora stationed Nicaraguan troops on the Isla Calero (in the delta of the San Juan River), justifying his action on the border delineation given by Google Maps. Google has since updated its data which it found to be incorrect. On January 27, 2014, documents leaked by Edward Snowden revealed that the NSA and the GCHQ intercepted Google Maps queries made on smartphones, and used them to locate the users making these queries. One leaked document, dating to 2008, stated that "[i]t effectively means that anyone using Google Maps on a smartphone is working in support of a GCHQ system." In May 2015, searches on Google Maps for offensive racial epithets for African Americans such as "nigger", "nigger king", and "nigger house" pointed the user to the White House; Google apologized for the incident. In December 2015, 3 Japanese netizens were charged with vandalism after they were found to have added an unrelated law firm's name as well as indecent names to locations such as "Nuclear test site" to the Atomic Bomb Dome and "Izumo Satya" to the Izumo Taisha. In February 2020, the artist Simon Weckert used 99 cell phones to fake a Google Maps traffic jam. In September 2024, several schools in Taiwan and Hong Kong were altered to incorrect labels, such as "psychiatric hospitals" or "prisons". Initially, it was believed to be the result of hacker attacks. However, police later revealed that local students had carried out the prank. Google quickly corrected the mislabeled entries. Education officials in Taiwan and Hong Kong expressed concern over the incident. Misdirection incidents Australia In August 2023, a woman driving from Alice Springs to the Harts Range Racecourse was stranded in the Central Australian desert for a night after following directions provided by Google Maps. She later discovered that Google Maps was providing directions for the actual Harts Range instead of the rodeo. Google said it was looking into the naming of the two locations and consulting with "local and authoritative sources" to solve the issue. In February 2024, two German tourists were stranded for a week after Google Maps directed them to follow a dirt track through Oyala Thumotang National Park and their vehicle became trapped in mud. Queensland Parks and Wildlife Service ranger Roger James said, "People should not trust Google Maps when they're travelling in remote regions of Queensland, and they need to follow the signs, use official maps or other navigational devices." North America In June 2019, Google Maps provided nearly 100 Colorado drivers an alternative route that led to a dirt road after a crash occurred on Peña Boulevard. The road had been turned to mud by rain, resulting in nearly 100 vehicles being trapped. Google said in a statement, "While we always work to provide the best directions, issues can arise due to unforeseen circumstances such as weather. We encourage all drivers to follow local laws, stay attentive, and use their best judgment while driving." In September 2023, Google was sued by a North Carolina resident who alleged that Google Maps had directed her husband over the Snow Creek Bridge in Hickory the year prior, resulting in him drowning. 
According to the lawsuit, multiple people had notified Google about the state of the bridge, which collapsed in 2013, but Google had not updated the route information and continued to direct users over the bridge. At the time of the man's death, the barriers placed to block access to the bridge had been vandalized. In November 2023, a hiker was rescued by helicopter on the backside of Mount Fromme in Vancouver. North Shore Rescue stated on its Facebook page that the hiker had followed a non-existent hiking trail on Google Maps. This was also the second hiker in two months to require rescuing after following the same trail. The fake trail has since been removed from the app. Also in November 2023, Google apologized after users were directed through desert roads after parts of Interstate 15 were closed due to a dust storm. Drivers became stranded after following the suggested detour route, which was a "bumpy dirt trail". Following the incident, Google stated that Google Maps would "no longer route drivers traveling between Las Vegas and Barstow down through those roads." Russia In 2020, a teenage motorist was found frozen to death while his passenger was still alive but suffered from severe frostbite after using Google Maps, which had led them to a shorter but abandoned section of the R504 Kolyma Highway, where their Toyota Chaser became disabled. India In 2024, three men from Uttar Pradesh died after their car fell from an under-construction bridge. They were using Google Maps for driving which misdirected them and the car fell into the Ramganga river. Discontinued features Google Latitude Google Latitude was a feature that let users share their physical locations with other people. This service was based on Google Maps, specifically on mobile devices. There was an iGoogle widget for desktops and laptops as well. Some concerns were expressed about the privacy issues raised by the use of the service. On August 9, 2013, this service was discontinued, and on March 22, 2017, Google incorporated the features from Latitude into the Google Maps app. Google Map Maker In areas where Google Map Maker was available, for example, much of Asia, Africa, Latin America and Europe as well as the United States and Canada, anyone who logged into their Google account could directly improve the map by fixing incorrect driving directions, adding biking trails, or adding a missing building or road. General map errors in Australia, Austria, Belgium, Denmark, France, Liechtenstein, Netherlands, New Zealand, Norway, South Africa, Switzerland, and the United States could be reported using the Report a Problem link in Google Maps and would be updated by Google. For areas where Google used Tele Atlas data, map errors could be reported using Tele Atlas map insight. If imagery was missing, outdated, misaligned, or generally incorrect, one could notify Google through their contact request form. In November 2016, Google announced the discontinuation of Google Map Maker as of March 2017. Mobile app Google Maps is available as a mobile app for the Android and iOS mobile operating systems. The first mobile version of Google Maps (then known as Google Local for Mobile) was launched in beta in November 2005 for mobile platforms supporting J2ME. It was released as Google Maps for Mobile in 2006. In 2007 it came preloaded on the first iPhone in a deal with Apple. A version specifically for Windows Mobile was released in February 2007 and the Symbian app was released in November 2007. 
Version 2.0 of Google Maps Mobile was announced at the end of 2007, with a standout My Location feature to find the user's location using cell towers, without needing GPS. In September 2008, Google Maps was released for and preloaded on Google's own new platform Android. Up until iOS 6, the built-in maps application on the iOS operating system was powered by Google Maps. However, with the announcement of iOS 6 in June 2012, Apple announced that it had created its own Apple Maps mapping service, which officially replaced Google Maps when iOS 6 was released on September 19, 2012. However, at launch, Apple Maps received significant criticism from users due to inaccuracies, errors and bugs. One day later, The Guardian reported that Google was preparing its own Google Maps app, which was released on December 12, 2012. Within two days, the application had been downloaded over ten million times. Features The Google Maps apps for iOS and Android have many of the same features, including turn-by-turn navigation, street view, and public transit information. Turn-by-turn navigation was originally announced by Google as a separate beta testing app exclusive to Android 2.0 devices in October 2009. The original standalone iOS version did not support the iPad, but tablet support was added with version 2.0 in July 2013. An update in June 2012 for Android devices added support for offline access to downloaded maps of certain regions, a feature that was eventually released for iOS devices, and made more robust on Android, in May 2014. At the end of 2015, Google Maps announced its new offline functionality, but with various limitations – the downloaded area cannot exceed 120,000 square kilometers and requires a considerable amount of storage space. In January 2017, Google added a feature exclusively to Android that will, in some U.S. cities, indicate the level of difficulty in finding available parking spots, and on both Android and iOS, the app can, as of an April 2017 update, remember where users parked. In August 2017, Google Maps for Android was updated with new functionality to actively help the user in finding parking lots and garages close to a destination. In December 2017, Google added a new two-wheeler mode to its Android app, designed for users in India, allowing for more accessibility in traffic conditions. In 2019, the Android version introduced a new feature called Live View that allows users to view directions overlaid directly on the road through augmented reality. Google Maps won the 2020 Webby Award for Best User Interface in the category Apps, Mobile & Voice. In March 2021, Google added a feature in which users can draw missing roads. In June 2022, Google implemented support for toll calculation. Both the iOS and Android apps report how much the user has to pay in tolls when a route that includes toll roads is input. The feature is available for roads in the US, India, Japan and Indonesia, with further expansion planned. According to reports, the total number of toll roads covered in this phase is around 2,000. Reception USA Today welcomed the application back to iOS, saying: "The reemergence in the middle of the night of a Google Maps app for the iPhone is like the return of an old friend. Only your friend, who'd gone missing for three months, comes back looking better than ever." Jason Parker of CNET, calling it "the king of maps", said, "With its iOS Maps app, Google sets the standard for what mobile navigation should be and more."
Bree Fowler of the Associated Press compared Google's and Apple's map applications, saying: "The one clear advantage that Apple has is style. Like Apple devices, the maps are clean and clear and have a fun, pretty element to them, especially in 3-D. But when it comes down to depth and information, Google still reigns superior and will no doubt be welcomed back by its fans." Gizmodo gave it a ranking of 4.5 stars, stating: "Maps Done Right". According to The New York Times, Google "admits that it's [iOS app is] even better than Google Maps for Android phones, which has accommodated its evolving feature set mainly by piling on menus". Google Maps' location tracking is regarded by some as a threat to users' privacy, with Dylan Tweney of VentureBeat writing in August 2014 that "Google is probably logging your location, step by step, via Google Maps", and linked users to Google's location history map, which "lets you see the path you've traced for any given day that your smartphone has been running Google Maps". Tweney then provided instructions on how to disable location history. The history tracking was also noticed, and recommended disabled, by editors at CNET and TechCrunch. Additionally, Quartz reported in April 2014 that a "sneaky new privacy change" would have an effect on the majority of iOS users. The privacy change, an update to the Gmail iOS app that "now supports sign-in across Google iOS apps, including Maps, Drive, YouTube and Chrome", meant that Google would be able to identify users' actions across its different apps. The Android version of the app surpassed five billion installations in March 2019. By November 2021, the Android app had surpassed 10 billion installations. Go version Google Maps Go, a version of the app designed for lower-end devices, was released in beta in January 2018. By September 2018, the app had over 10 million installations. Artistic and literary uses The German "geo-novel" Senghor on the Rocks (2008) presents its story as a series of spreads showing a Google Maps location on the left and the story's text on the right. Annika Richterich explains that the "satellite pictures in Senghor on the Rocks illustrate the main character's travel through the West-African state of Senegal". Artists have used Google Street View in a range of ways. Emilio Vavarella's The Google Trilogy includes glitchy images and unintended portraits of the drivers of the Street View cars. The Japanese band group inou used Google Street View backgrounds to make a music video for their song EYE. The Canadian band Arcade Fire made a customized music video that used Street View to show the viewer their own childhood home.
Technology
Utility
null
151590
https://en.wikipedia.org/wiki/Faraday%20cage
Faraday cage
A Faraday cage or Faraday shield is an enclosure used to block some electromagnetic fields. A Faraday shield may be formed by a continuous covering of conductive material, or in the case of a Faraday cage, by a mesh of such materials. Faraday cages are named after scientist Michael Faraday, who first constructed one in 1836. Faraday cages work because an external electrical field will cause the electric charges within the cage's conducting material to be distributed in a way that cancels out the field's effect inside the cage. This phenomenon can be used to protect sensitive electronic equipment (for example RF receivers) from external radio frequency interference (RFI) often during testing or alignment of the device. Faraday cages are also used to protect people and equipment against electric currents such as lightning strikes and electrostatic discharges, because the cage conducts electrical current around the outside of the enclosed space and none passes through the interior. Faraday cages cannot block stable or slowly varying magnetic fields, such as the Earth's magnetic field (a compass will still work inside one). To a large degree, however, they shield the interior from external electromagnetic radiation if the conductor is thick enough and any holes are significantly smaller than the wavelength of the radiation. For example, certain computer forensic test procedures of electronic systems that require an environment free of electromagnetic interference can be carried out within a screened room. These rooms are spaces that are completely enclosed by one or more layers of a fine metal mesh or perforated sheet metal. The metal layers are grounded to dissipate any electric currents generated from external or internal electromagnetic fields, and thus they block a large amount of the electromagnetic interference (see also electromagnetic shielding). They provide less attenuation of outgoing transmissions than incoming: they can block electromagnetic pulse (EMP) waves from natural phenomena very effectively, but especially in upper frequencies, a tracking device may be able to penetrate from within the cage (e.g., some cell phones operate at various radio frequencies so while one frequency may not work, another one will). The reception or transmission of radio waves, a form of electromagnetic radiation, to or from an antenna within a Faraday cage is heavily attenuated or blocked by the cage; however, a Faraday cage has varied attenuation depending on wave form, frequency, or the distance from receiver or transmitter, and receiver or transmitter power. Near-field, high-powered frequency transmissions like HF RFID are more likely to penetrate. Solid cages generally attenuate fields over a broader range of frequencies than mesh cages. History In 1754, Jean-Antoine Nollet published an account of the cage effect in his Leçons de physique expérimentale. In 1755, Benjamin Franklin observed the effect by lowering an uncharged cork ball suspended on a silk thread through an opening in an electrically charged metal can. The behavior is that of a Faraday cage or shield. In 1836, Michael Faraday observed that the excess charge on a charged conductor resided only on its exterior and had no influence on anything enclosed within it. To demonstrate this, he built a room coated with metal foil and allowed high-voltage discharges from an electrostatic generator to strike the outside of the room. He used an electroscope to show that there was no electric charge present on the inside of the room walls. 
Operation Continuous A continuous Faraday shield is a hollow conductor. Externally or internally applied electromagnetic fields produce forces on the charge carriers (usually electrons) within the conductor; the charges are redistributed accordingly due to electrostatic induction. The redistributed charges greatly reduce the voltage within the surface, to an extent depending on the capacitance; however, full cancellation does not occur. Interior charges If charge +Q is placed inside an ungrounded Faraday shield without touching the walls, the internal face of the shield becomes charged with −Q, leading to field lines originating at the charge and extending to charges inside the inner surface of the metal. The field line paths in this inside space (to the endpoint negative charges) are dependent on the shape of the inner containment walls. Simultaneously +Q accumulates on the outer face of the shield. The spread of charges on the outer face is not affected by the position of the internal charge inside the enclosure, but rather determined by the shape of outer face. So for all intents and purposes, the Faraday shield generates the same static electric field on the outside that it would generate if the metal were simply charged with +Q. See Faraday's ice pail experiment, for example, for more details on electric field lines and the decoupling of the outside from the inside. Note that electromagnetic waves are not static charges. If the cage is grounded, the excess charges will be neutralized as the ground connection creates an equipotential bonding between the outside of the cage and the environment, so there is no voltage between them and therefore also no field. The inner face and the inner charge will remain the same so the field is kept inside. Exterior fields Effectiveness of the shielding of a static electric field is largely independent of the geometry of the conductive material; however, the static magnetic fields can penetrate the shield completely. In the case of varying electromagnetic fields, the faster the variations are (i.e., the higher the frequencies), the better the material resists magnetic field penetration. In this case the shielding also depends on the electrical conductivity, the magnetic properties of the conductive materials used in the cages, as well as their thicknesses. A good example of the effectiveness of a Faraday shield can be obtained from considerations of skin depth. With skin depth, the current flowing is mostly in the surface, and decays exponentially with depth through the material. Because a Faraday shield has finite thickness, this determines how well the shield works; a thicker shield can attenuate electromagnetic fields better, and to a lower frequency. Faraday cage Faraday cages are Faraday shields that have holes in them and are therefore more complex to analyze. Whereas continuous shields essentially attenuate all wavelengths whose skin depth in the hull material is less than the thickness of the hull, the holes in a cage may permit shorter wavelengths to pass through or set up "evanescent fields" (oscillating fields that do not propagate as EM waves) just beyond the surface. The shorter the wavelength, the better it passes through a mesh of given size. Thus, to work well at short wavelengths (i.e., high frequencies), the holes in the cage must be smaller than the wavelength of the incident wave. Examples Faraday cages are routinely used in analytical chemistry to reduce noise while making sensitive measurements. 
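The dependence of shielding on conductivity, frequency and material thickness discussed under "Exterior fields" can be illustrated with the textbook skin-depth formula δ = √(2ρ/ωμ). The Python sketch below uses an assumed handbook value for copper's resistivity and assumes a non-magnetic material; it is a rough illustration of the scaling, not a statement about any particular cage.

```python
import math

MU_0 = 4e-7 * math.pi       # vacuum permeability, H/m
RHO_COPPER = 1.68e-8        # resistivity of copper, ohm*m (assumed handbook value)

def skin_depth(freq_hz, resistivity=RHO_COPPER, mu_r=1.0):
    """Classical skin depth: delta = sqrt(2*rho / (omega * mu))."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * resistivity / (omega * mu_r * MU_0))

# Illustrative frequencies: mains power, AM band, FM band, Wi-Fi.
for f in (50, 1e6, 100e6, 2.4e9):
    d = skin_depth(f)
    print(f"{f:>12.0f} Hz  ->  skin depth ~ {d * 1e6:10.1f} um")
```

The output shows the skin depth shrinking from millimetres at mains frequency to about a micrometre at gigahertz frequencies, which is why even thin conductive foil can attenuate high-frequency fields effectively while low-frequency magnetic fields pass through.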
Faraday cages, more specifically dual paired seam Faraday bags, are often used in digital forensics to prevent remote wiping and alteration of criminal digital evidence. Faraday bags are portable containers fabricated with metallic materials that are used to contain devices in order to protect them from electromagnetic transmissions for a wide range of applications, from enhancing digital privacy of cell telephones to protecting credit cards from RFID skimming. The U.S. and NATO Tempest standards, and similar standards in other countries, include Faraday cages as part of a broader effort to provide emission security for computers. Automobile and airplane passenger compartments are essentially Faraday cages, protecting passengers from electric charges, such as lightning. Electronic components in automobiles and aircraft use Faraday cages to protect signals from interference. Sensitive components may include wireless door locks, navigation/GPS systems, and lane departure warning systems. Faraday cages and shields are also critical to vehicle infotainment systems (e.g. radio, Wi-Fi, and GPS display units), which may be designed with the capability to function as critical circuits in emergency situations. A booster bag (shopping bag lined with aluminum foil) acts as a Faraday cage. It is often used by shoplifters to steal RFID-tagged items. Similar containers are used to resist RFID skimming. Elevators and other rooms with metallic conducting frames and walls simulate a Faraday cage effect, leading to a loss of signal and "dead zones" for users of cellular phones, radios, and other electronic devices that require external electromagnetic signals. During training, firefighters and other first responders are cautioned that their two-way radios will probably not work inside elevators and to make allowances for that. Small, physical Faraday cages are used by electronics engineers during equipment testing to simulate such an environment to make sure that the device gracefully handles these conditions. Properly designed conductive clothing can also form a protective Faraday cage. Some electrical linemen wear Faraday suits, which allow them to work on live, high-voltage power lines without risk of electrocution. The suit prevents electric current from flowing through the body and it has no theoretical voltage limit. Linemen have successfully worked even the highest voltage (Kazakhstan's Ekibastuz–Kokshetau line 1150 kV) lines safely. The scan room of a magnetic resonance imaging (MRI) machine is designed as a Faraday cage. This prevents external RF (radio frequency) signals from being added to data collected from the patient, which would affect the resulting image. Technologists are trained to identify the characteristic artifacts created on images should the Faraday cage be damaged, such as during a thunderstorm. A microwave oven uses a partial Faraday shield (on five of its interior six sides) and a partial Faraday cage, consisting of a wire mesh, on the sixth side (the transparent window), to contain the electromagnetic energy within the oven and to protect the user from exposure to microwave radiation. Plastic bags that are impregnated with metal are used to enclose electronic toll collection devices whenever tolls should not be charged to those devices, such as during transit or when the user is paying cash. 
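A quick numerical illustration of the hole-size argument, using the microwave-oven mesh as the example: the free-space wavelength λ = c/f at a typical magnetron frequency of about 2.45 GHz is roughly 12 cm, far larger than the few-millimetre holes in the door mesh. The frequency and hole size below are assumed typical values, not figures taken from the text.

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_m(freq_hz):
    """Free-space wavelength for a given frequency."""
    return C / freq_hz

f_oven = 2.45e9      # assumed typical microwave-oven magnetron frequency, Hz
hole = 0.002         # assumed mesh hole size, m (about 2 mm)

lam = wavelength_m(f_oven)
print(f"2.45 GHz wavelength: {lam * 100:.1f} cm")
print(f"ratio of wavelength to a 2 mm mesh hole: ~{lam / hole:.0f}x")
```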
The shield of a screened cable, such as a USB cable or the coaxial cable used for cable television, protects the internal conductors from external electrical noise and prevents the RF signals from leaking out. Electronic components in some musical instruments, such as electric guitars, are protected by Faraday cages made from copper or aluminum foil, which shield the instrument's electromagnetic pickups from interference from speakers, amplifiers, stage lights, and other musical equipment. Some buildings, such as prisons, are constructed as Faraday cages to block both incoming and outgoing cell phone calls by prisoners. The exhibit hall of the Green Bank Observatory is a Faraday cage to prevent interference with the operations of its radio telescopes.
Technology
Signal transmission
null
151648
https://en.wikipedia.org/wiki/Very%20Large%20Telescope
Very Large Telescope
The Very Large Telescope (VLT) is an astronomical facility operated since 1998 by the European Southern Observatory, located on Cerro Paranal in the Atacama Desert of northern Chile. It consists of four individual telescopes, each equipped with a primary mirror that measures 8.2 meters in diameter. These optical telescopes, named Antu, Kueyen, Melipal, and Yepun (all words for astronomical objects in the Mapuche language), are generally used separately but can be combined to achieve a very high angular resolution. The VLT array is also complemented by four movable Auxiliary Telescopes (ATs) with 1.8-meter apertures. The VLT is capable of observing both visible and infrared wavelengths. Each individual telescope can detect objects that are roughly four billion times fainter than what can be seen with the naked eye. When all the telescopes are combined, the facility can achieve an angular resolution of approximately 0.002 arcsecond. In single telescope mode, the angular resolution is about 0.05 arcseconds. The VLT is one of the most productive facilities for astronomy, second only to the Hubble Space Telescope in terms of the number of scientific papers produced from facilities operating at visible wavelengths. Some of the pioneering observations made using the VLT include the first direct image of an exoplanet, the tracking of stars orbiting around the supermassive black hole at the centre of the Milky Way, and observations of the afterglow of the furthest known gamma-ray burst. General information The VLT consists of an arrangement of four large (8.2 metre diameter) telescopes (called Unit Telescopes or UTs) with optical elements that can combine them into an astronomical interferometer (VLTI), which is used to resolve small objects. The interferometer also includes a set of four 1.8 meter diameter movable telescopes dedicated to interferometric observations. The first of the UTs started operating in May 1998 and was offered to the astronomical community on 1 April 1999. The other telescopes became operational in 1999 and 2000, enabling multi-telescope VLT capability. Four 1.8-metre Auxiliary Telescopes (ATs) have been added to the VLTI to make it available when the UTs are being used for other projects. These ATs were installed and became operational between 2004 and 2007. The VLT's 8.2-meter telescopes were originally designed to operate in three modes: as a set of four independent telescopes (this is the primary mode of operation). as a single large coherent interferometric instrument (the VLT Interferometer or VLTI), for extra resolution. This mode is used for observations of relatively bright sources with small angular extent. as a single large incoherent instrument, for extra light-gathering capacity. The instrumentation required to obtain a combined incoherent focus was not originally built. In 2009, new instrumentation proposals were put forward to potentially make that observing mode available. Multiple telescopes are sometimes independently pointed at the same object, either to increase the total light-gathering power or to provide simultaneous observations with complementary instruments. Unit telescopes The UTs are equipped with a large set of instruments permitting observations to be performed from the near-ultraviolet to the mid-infrared (i.e. a large fraction of the light wavelengths accessible from the surface of the Earth), with the full range of techniques including high-resolution spectroscopy, multi-object spectroscopy, imaging, and high-resolution imaging. 
In particular, the VLT has several adaptive optics systems, which correct for the effects of atmospheric turbulence, providing images almost as sharp as if the telescope were in space. In the near-infrared, the adaptive optics images of the VLT are up to three times sharper than those of the Hubble Space Telescope, and the spectroscopic resolution is many times better than Hubble. The VLTs are noted for their high level of observing efficiency and automation. The primary mirrors of the UTs are 8.2 meters in diameter but, in practice, the pupil of the telescopes is defined by their secondary mirrors, effectively reducing the usable diameter to 8.0 meters at the Nasmyth focus and 8.1 meters at the Cassegrain focus. The 8.2 m-diameter telescopes are housed in compact, thermally controlled buildings, which rotate synchronously with the telescopes. This design minimises any adverse effects on the observing conditions, for instance from air turbulence in the telescope tube, which might otherwise occur due to variations in the temperature and wind flow. The principal role of the main VLT telescopes is to operate as four independent telescopes. The interferometry (combining light from multiple telescopes) is used about 20 percent of the time for very high-resolution on bright objects, for example, on Betelgeuse. This mode allows astronomers to see details up to 25 times finer than with the individual telescopes. The light beams are combined in the VLTI using a complex system of mirrors in tunnels where the light paths must be kept equal within differences of less than 1 μm over a light path of a hundred metres. With this kind of precision, the VLTI can reconstruct images with an angular resolution of milliarcseconds. Mapuche names for the Unit Telescopes It had long been ESO's intention to provide "real" names to the four VLT Unit Telescopes, to replace the original technical designations of UT1 to UT4. In March 1999, at the time of the Paranal inauguration, four meaningful names of objects in the sky in the Mapuche language were chosen. This indigenous people lives mostly south of Santiago de Chile. An essay contest was arranged in this connection among schoolchildren of the Chilean II Region of which Antofagasta is the capital to write about the implications of these names. It drew many entries dealing with the cultural heritage of ESO's host country. The winning essay was submitted by 17-year-old Jorssy Albanez Castilla from Chuquicamata near the city of Calama. She received the prize, an amateur telescope, during the inauguration of the Paranal site. Unit Telescopes 1–4 are since known as Antu (Sun), Kueyen (Moon), Melipal (Southern Cross), and Yepun (Evening Star), respectively. Originally there was some confusion as to whether Yepun actually stands for the evening star Venus, because a Spanish-Mapuche dictionary from the 1940s wrongly translated Yepun as "Sirius". Auxiliary telescopes Although the four 8.2-metre Unit Telescopes can be combined in the VLTI, their observation time is spent mostly on individual observations, and are used for interferometric observations for a limited number of nights every year. However, the four smaller 1.8-metre ATs are available and dedicated to interferometry to allow the VLTI to operate every night. The top part of each AT is a round enclosure, made from two sets of three segments, which open and close. Its job is to protect the delicate 1.8-metre telescope from the desert conditions. 
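A rough sanity check on the resolution figures quoted above can be made with the standard diffraction estimates: θ ≈ 1.22 λ/D for a single aperture of diameter D, and θ ≈ λ/B for an interferometric baseline B. The wavelength (1 μm) and the 130 m baseline used in the sketch below are illustrative round numbers, not official specifications.

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ~206265 arcseconds per radian

def single_dish_resolution(wavelength_m, diameter_m):
    """Rayleigh criterion for a single circular aperture, in arcseconds."""
    return 1.22 * wavelength_m / diameter_m * ARCSEC_PER_RAD

def baseline_resolution(wavelength_m, baseline_m):
    """Rough fringe-spacing estimate for an interferometric baseline, in arcseconds."""
    return wavelength_m / baseline_m * ARCSEC_PER_RAD

wavelength = 1.0e-6   # 1 micron (near-infrared), assumed for illustration
print(f"8.2 m UT   : ~{single_dish_resolution(wavelength, 8.2):.3f} arcsec")
print(f"130 m VLTI : ~{baseline_resolution(wavelength, 130.0) * 1000:.1f} milliarcsec")
```

These estimates come out at a few hundredths of an arcsecond for a single 8.2 m mirror and a couple of milliarcseconds for a ~130 m baseline, consistent with the figures given for the single-telescope and interferometric modes.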
The enclosure is supported by the boxy transporter section, which also contains electronics cabinets, liquid cooling systems, air-conditioning units, power supplies, and more. During astronomical observations the enclosure and transporter are mechanically isolated from the telescope, to ensure that no vibrations compromise the data collected. The transporter section runs on tracks, so the ATs can be moved to 30 different observing locations. As the VLTI acts rather like a single telescope as large as the group of telescopes combined, changing the positions of the ATs means that the VLTI can be adjusted according to the needs of the observing project. The reconfigurable nature of the VLTI is similar to that of the Very Large Array. Scientific results Results from the VLT have led to the publication of an average of more than one peer-reviewed scientific paper per day. For instance in 2017, over 600 refereed scientific papers were published based on VLT data. The telescope's scientific discoveries include direct imaging of Beta Pictoris b, the first extrasolar planet so imaged, tracking individual stars moving around the supermassive black hole at the centre of the Milky Way, and observing the afterglow of the furthest known gamma-ray burst. In 2018, the VLT helped to perform the first successful test of Albert Einstein's General Relativity on the motion of a star passing through the extreme gravitational field near the supermassive black hole, that is the gravitational redshift. In fact, the observation has been conducted for over 26 years with the SINFONI and NACO adaptive optics instruments in the VLT while the new approach in 2018 also used the beam-combiner instrument GRAVITY. The Galactic Centre team at the Max Planck Institute for Extraterrestrial Physics (MPE) used these observations to reveal these effects for the first time. Other discoveries with VLT's signature include the detection of carbon monoxide molecules in a galaxy located almost 11 billion light-years away for the first time, a feat that had remained elusive for 25 years. This has allowed astronomers to obtain the most precise measurement of the cosmic temperature at such a remote epoch. Another important study was that of the violent flares from the supermassive black hole at the centre of the Milky Way. The VLT and APEX teamed up to reveal material being stretched out as it orbits in the intense gravity close to the central black hole. Using the VLT, astronomers have also estimated the age of extremely old stars in the NGC 6397 cluster. Based on stellar evolution models, two stars were found to be 13.4 ± 0.8 billion years old, that is, they are from the earliest era of star formation in the Universe. They have also analysed the atmosphere around a super-Earth exoplanet for the first time using the VLT. The planet, which is known as GJ 1214b, was studied as it passed in front of its parent star and some of the starlight passed through the planet's atmosphere. In all, of the top 10 discoveries done at ESO's observatories, seven made use of the VLT. Technical details Telescopes Each Unit Telescope is a Ritchey-Chretien Cassegrain telescope with a 22-tonne 8.2 metre Zerodur primary mirror of 14.4 m focal length, and a 1.1 metre lightweight beryllium secondary mirror. 
A flat tertiary mirror diverts the light to one of two instruments at the f/15 Nasmyth foci on either side, with a system focal length of 120 m, or the tertiary tilts aside to allow light through the primary mirror central hole to a third instrument at the Cassegrain focus. This allows switching between any of the three instruments within 5 minutes, to match observing conditions. Additional mirrors can send the light via tunnels to the central VLTI beam-combiners. The maximum field-of-view (at Nasmyth foci) is around 27 arcminutes diameter, slightly smaller than the full moon, though most instruments view a narrower field. Each telescope has an alt-azimuth mount with total mass around 350 tonnes, and uses active optics with 150 supports on the back of the primary mirror to control the shape of the thin (177mm thick) mirror by computers. Instruments The VLT instrumentation programme is the most ambitious programme ever conceived for a single observatory. It includes large-field imagers, adaptive optics corrected cameras and spectrographs, as well as high-resolution and multi-object spectrographs and covers a broad spectral region, from deep ultraviolet (300 nm) to mid-infrared (24 μm) wavelengths. In addition to these, GRAVITY and MATISSE are currently installed in the VLTI lab, along with ESPRESSO fed via fibre-optics (not interferometric). AMBER (VLTI) The Astronomical Multi-Beam Recombiner instrument combines three telescopes of the VLT at the same time, dispersing the light in a spectrograph to analyse the composition and shape of the observed object. AMBER is notably the "most-productive interferometric instrument ever". It has been decommissioned. CRIRES and CRIRES+ The Cryogenic Infrared Echelle Spectrograph is an adaptive optics assisted echelle spectrograph. It provides a resolving power of up to 100,000 in the infrared spectral range from 1 to 5 micrometres. From 2014 to 2020 it underwent a major upgrade to CRIRES+ to provide ten times larger simultaneous wavelength coverage. A new detector focal plane array of three Hawaii 2RG detectors with a 5.3 μm cut-off wavelength replaced the existing detectors, a new spectropolarimetric unit is added, and the calibration system is enhanced. One of the scientific objectives of CRIRES+ is in-transit spectroscopy of exoplanets, which currently provides us with the only means of studying exoplanetary atmospheres. Transiting planets are almost always close-in planets that are hot and radiate most of their light in the infrared (IR). Furthermore, the IR is a spectral region where lines of molecular gases like carbon monoxide (CO), ammonia (NH3), and methane (CH4), etc. are expected from the exoplanetary atmosphere. This important wavelength region is covered by CRIRES+, which will additionally allow tracking multiple absorption lines simultaneously. ERIS Enhanced Resolution Imager and Spectrograph is the newest VLT instrument, which started science operation in 2023. It is an adaptive-optics assisted near-infrared imager (with coronagraph option) and integral-field spectrograph. It replaces the former NACO and SINFONI instruments with improved capability. ESPRESSO Echelle Spectrograph for Rocky Exoplanet and Stable Spectroscopic Observations) is a high-resolution, fiber-fed and cross-dispersed echelle spectrograph for the visible wavelength range, capable of operating in 1-UT mode (using one of the four telescopes) and in 4-UT mode (using all four), for the search for rocky extra-solar planets in the habitable zone of their host stars. 
Its main features are its spectroscopic stability and radial-velocity precision. The requirement is to reach 10 cm/s, but the goal is a precision of a few cm/s. ESPRESSO was installed and commissioned at the VLT in 2017–2018. FLAMES The Fibre Large Array Multi-Element Spectrograph is a multi-object fibre feed unit for UVES and GIRAFFE, the latter allowing hundreds of individual stars in nearby galaxies to be studied simultaneously at moderate spectral resolution in the visible. FORS1/FORS2 The Focal Reducer and Low Dispersion Spectrograph is a visible light camera and multi-object spectrograph with a 6.8 arcminute field of view. FORS2 is an upgraded version of FORS1 and includes further multi-object spectroscopy capabilities. FORS1 was retired in 2009 to make space for X-SHOOTER; FORS2 continues to operate as of 2021. GRAVITY (VLTI) GRAVITY is an adaptive optics assisted, near-infrared (NIR) instrument for micro-arcsecond precision narrow-angle astrometry and interferometric phase-referenced imaging of faint celestial objects. This instrument interferometrically combines NIR light collected by four telescopes at the VLTI. HAWK-I The High Acuity Wide field K-band Imager is a near-infrared imager with a relatively large field of view, about 8x8 arcminutes. ISAAC The Infrared Spectrometer And Array Camera was a near-infrared imager and spectrograph; it operated successfully from 2000 to 2013 and was then retired to make way for SPHERE, since most of its capabilities can now be delivered by the newer HAWK-I or KMOS. KMOS KMOS (K-band Multi Object Spectrograph) is a cryogenic near-infrared multi-object spectrometer, observing 24 objects simultaneously, intended primarily for the study of distant galaxies. MATISSE (VLTI) The Multi Aperture Mid-Infrared Spectroscopic Experiment is an infrared spectro-interferometer of the VLT Interferometer, which can combine the beams of all four Unit Telescopes (UTs) or of the four Auxiliary Telescopes (ATs). The instrument is used for image reconstruction. After 12 years of development, it saw first light at Paranal in March 2018. MIDI (VLTI) MIDI was an instrument combining two telescopes of the VLT in the mid-infrared, dispersing the light in a spectrograph to analyse the dust composition and shape of the observed object. MIDI is notably the second most-productive interferometric instrument ever (recently surpassed by AMBER). MIDI was retired in March 2015 to prepare the VLTI for the arrival of GRAVITY and MATISSE. MUSE MUSE is a huge "3-dimensional" spectroscopic explorer which provides complete visible spectra of all objects contained in "pencil beams" through the Universe. NACO NAOS-CONICA (NAOS meaning Nasmyth Adaptive Optics System and CONICA meaning Coudé Near Infrared Camera) is an adaptive optics facility which produces infrared images as sharp as if taken in space and includes spectroscopic, polarimetric and coronagraphic capabilities. PIONIER (VLTI) PIONIER is an instrument that combines the light of all four 8-metre telescopes, allowing details about 16 times finer than can be seen with a single UT to be picked up. SINFONI SINFONI (the Spectrograph for Integral Field Observations in the Near Infrared) was a medium-resolution, near-infrared (1–2.5 micrometres) integral field spectrograph fed by an adaptive optics module. It operated from 2003 until it was retired in June 2019 to make space for the future ERIS. 
SPHERE The Spectro-Polarimetric High-Contrast Exoplanet Research, a high-contrast adaptive optics system dedicated to the discovery and study of exoplanets. ULTRACAM ULTRACAM is a visitor instrument for ultra-high-speed photometry of variable objects. ULTRACAM provides three simultaneous bands of optical photometry. UVES The Ultraviolet and Visual Echelle Spectrograph is a high-resolution ultraviolet and visible light echelle spectrograph. VIMOS The Visible Multi-Object Spectrograph delivered visible images and spectra of up to 1,000 galaxies at a time in a 14 × 14 arcmin field of view. It was mainly used for several large redshift surveys of distant galaxies, including VVDS, zCOSMOS and VIPERS. It was retired in 2018 to make space for the return of CRIRES+. VINCI (VLTI) VINCI was a test instrument combining two telescopes of the VLT. It was the first-light instrument of the VLTI and is no longer in use. VISIR The VLT spectrometer and imager for the mid-infrared provides diffraction-limited imaging and spectroscopy at a range of resolutions in the 10 and 20 micrometre mid-infrared (MIR) atmospheric windows. VISIR hosts the NEAR Science Demonstration, where NEAR is New Earths in the Alpha Centauri Region. X-Shooter X-Shooter is the first second-generation instrument, operating since 2009. It is a very wide-band [UV to near infrared] single-object spectrometer designed to explore the properties of rare, unusual or unidentified sources. Interferometry In its interferometric operating mode, the light from the telescopes is reflected off mirrors and directed through tunnels to a central beam combining laboratory. In the year 2001, during commissioning, the VLTI successfully measured the angular diameters of four red dwarfs including Proxima Centauri. During this operation it achieved an angular resolution of ±0.08 milli-arc-seconds (0.388 nano-radians). This is comparable to the resolution achieved using other arrays such as the Navy Prototype Optical Interferometer and the CHARA array. Unlike many earlier optical and infrared interferometers, the Astronomical Multi-Beam Recombiner (AMBER) instrument on VLTI was initially designed to perform coherent integration (which requires signal-to-noise greater than one in each atmospheric coherence time). Using the big telescopes and coherent integration, the faintest object the VLTI can observe is magnitude 7 in the near infrared for broadband observations, similar to many other near infrared / optical interferometers without fringe tracking. In 2011, an incoherent integration mode was introduced called AMBER "blind mode", which is more similar to the observation mode used at earlier interferometer arrays such as COAST, IOTA and CHARA. In this "blind mode", AMBER can observe sources as faint as K=10 in medium spectral resolution. At more challenging mid-infrared wavelengths, the VLTI can reach magnitude 4.5, significantly fainter than the Infrared Spatial Interferometer. When fringe tracking is introduced, the limiting magnitude of the VLTI is expected to improve by a factor of almost 1000, reaching a magnitude of about 14. This is similar to what is expected for other fringe tracking interferometers. In spectroscopic mode, the VLTI can currently reach a magnitude of 1.5. The VLTI can work in a fully integrated way, so that interferometric observations are actually quite simple to prepare and execute. 
The VLTI became the world's first general-user optical/infrared interferometric facility offered as a service to the astronomical community. Because of the many mirrors involved in the optical train, about 95% of the light is lost before reaching the instruments at a wavelength of 1 μm, 90% at 2 μm and 75% at 10 μm. This refers to reflection off 32 surfaces, including the Coudé train, the star separator, the main delay line, the beam compressor and the feeding optics. Additionally, the interferometric technique is very efficient only for objects that are small enough that all their light is concentrated. An object with a relatively low surface brightness, such as the Moon, cannot be observed, because its light is too diluted. Only targets at temperatures of more than 1,000 °C have a surface brightness high enough to be observed in the mid-infrared, and objects must be at several thousands of degrees Celsius for near-infrared observations using the VLTI. This includes most of the stars in the solar neighborhood and many extragalactic objects such as bright active galactic nuclei, but this sensitivity limit rules out interferometric observations of most solar-system objects. Although the use of large telescope diameters and adaptive optics correction can improve the sensitivity, this cannot extend the reach of optical interferometry beyond nearby stars and the brightest active galactic nuclei. Because the Unit Telescopes are used most of the time independently, they are used in the interferometric mode mostly during bright time (that is, close to full moon). At other times, interferometry is done using the 1.8-metre Auxiliary Telescopes (ATs), which are dedicated to full-time interferometric measurements. The first observations using a pair of ATs were conducted in February 2005, and all four ATs have now been commissioned. For interferometric observations of the brightest objects, there is little benefit in using 8-metre telescopes rather than 1.8-metre telescopes. The first two instruments at the VLTI were VINCI (a test instrument used to set up the system, now decommissioned) and MIDI, which allowed only two telescopes to be used at any one time. With the installation of the three-telescope AMBER closure-phase instrument in 2005, the first imaging observations from the VLTI were expected soon afterwards. Deployment of the Phase Referenced Imaging and Microarcsecond Astrometry (PRIMA) instrument started in 2008, with the aim of allowing phase-referenced measurements in either an astrometric two-beam mode or as a fringe-tracker successor to VINCI, operated concurrently with one of the other instruments. After falling drastically behind schedule and failing to meet some specifications, in December 2004 the VLT Interferometer became the target of a second ESO "recovery plan", involving additional effort concentrated on improvements to fringe tracking and the performance of the main delay lines; this applied only to the interferometer and not to other instruments on Paranal. In 2005, the VLTI was routinely producing observations, although with a brighter limiting magnitude and poorer observing efficiency than expected. The VLTI had already led to 89 peer-reviewed publications and to a first-ever image of the inner structure of the mysterious Eta Carinae. 
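The throughput figures quoted above imply a per-surface reflectivity of roughly 91–96%, since the light is reflected off about 32 surfaces before reaching the instruments. A short back-of-the-envelope check of that arithmetic follows; the per-surface values are simply back-calculated from the quoted totals, not independent measurements.

```python
# Back-of-the-envelope check of the quoted VLTI throughput figures.
# Throughput T after n reflections with per-surface reflectivity r is r**n,
# so r = T**(1/n).  The losses below are the totals quoted in the text.
n_surfaces = 32
for wavelength, loss in (("1 um", 0.95), ("2 um", 0.90), ("10 um", 0.75)):
    throughput = 1.0 - loss
    per_surface = throughput ** (1.0 / n_surfaces)
    print(f"{wavelength:>5}: total throughput {throughput:4.0%} "
          f"-> ~{per_surface:.1%} per reflection")
```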
In March 2011, the PIONIER instrument for the first time simultaneously combined the light of the four Unit Telescopes, potentially making VLTI the biggest optical telescope in the world. However, this attempt was not really a success. The first successful attempt was in February 2012, with four telescopes combined into a 130-meter diameter mirror. In March 2019, ESO astronomers, employing the GRAVITY instrument on their Very Large Telescope Interferometer (VLTI), announced the first direct detection of an exoplanet, HR 8799 e, using optical interferometry. In popular culture One of the large mirrors of the telescopes was the subject of an episode of the National Geographic Channel's reality series World's Toughest Fixes, where a crew of engineers removed and transported the mirror to be cleaned and re-coated with aluminium. The job required battling strong winds, fixing a broken pump in a giant washing machine and resolving a rigging issue. The procedure is part of routine scheduled maintenance. The area surrounding the Very Large Telescope was featured in the 2008 film Quantum of Solace. The ESO Hotel, the Residencia, served as a backdrop for part of the James Bond movie. Producer Michael G. Wilson said: "The Residencia of Paranal Observatory caught the attention of our director, Marc Forster and production designer, Dennis Gassner, both for its exceptional design and its remote location in the Atacama desert. It is a true oasis and the perfect hide out for Dominic Greene, our villain, whom 007 is tracking in our new James Bond film."
Technology
Ground-based observatories
null
151651
https://en.wikipedia.org/wiki/Fomalhaut
Fomalhaut
Fomalhaut (, ) is the brightest star in the southern constellation of Piscis Austrinus, the Southern Fish, and one of the brightest stars in the night sky. It has the Bayer designation Alpha Piscis Austrini, which is an alternative form of α Piscis Austrini, and is abbreviated Alpha PsA or α PsA. This is a class A star on the main sequence approximately from the Sun as measured by the Hipparcos astrometry satellite. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. It is classified as a Vega-like star that emits excess infrared radiation, indicating it is surrounded by a circumstellar disk. Fomalhaut, K-type main-sequence star TW Piscis Austrini, and M-type, red dwarf star LP 876-10 constitute a triple system, even though the companions are separated by approximately 8 degrees. Fomalhaut was the first stellar system with an extrasolar planet candidate imaged at visible wavelengths, designated Fomalhaut b. However, analyses in 2019 and 2023 of existing and new observations indicate that Fomalhaut b is not a planet, but rather an expanding region of debris from a massive planetesimal collision. Nomenclature α Piscis Austrini, or Alpha Piscis Austrini, is the system's Bayer designation. It also bears the Flamsteed designation of 24 Piscis Austrini. The classical astronomer Ptolemy included it in the constellation of Aquarius, along with the rest of Piscis Austrinus. In the 17th century, Johann Bayer firmly planted it in the primary position of Piscis Austrinus. Following Ptolemy, John Flamsteed in 1725 additionally denoted it 79 Aquarii. The current designation reflects modern consensus on Bayer's decision, that the star belongs in Piscis Austrinus. Under the rules for naming objects in multiple-star systems, the three components – Fomalhaut, TW Piscis Austrini and LP 876-10 – are designated A, B and C, respectively. The star's traditional name derives from Fom al-Haut from scientific Arabic "the mouth of the [Southern] Fish" (literally, "mouth of the whale"), a translation of how Ptolemy labeled it. Fam in Arabic means "mouth", al "the", and ḥūt "fish" or "whale". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included the name "Fomalhaut" for this star. In July 2014, the International Astronomical Union (IAU) launched NameExoWorlds, a process for giving proper names to certain exoplanets. The process involved public nomination and voting for the new names. In December 2015, the IAU announced "Dagon" as the winning name for Fomalhaut b. The winning name was proposed by Todd Vaccaro and forwarded by the St. Cloud State University Planetarium of St. Cloud, Minnesota, United States of America, to the IAU for consideration. Dagon was a Semitic deity, often represented as half-man, half-fish. Fomalhaut A At a declination of −29.6°, Fomalhaut is located south of the celestial equator, and hence is best viewed from the Southern Hemisphere. However, its southerly declination is not as great as that of stars such as Acrux, Alpha Centauri and Canopus, meaning that, unlike them, Fomalhaut is visible from a large part of the Northern Hemisphere as well, being best seen in autumn. Its declination is greater than that of Sirius and similar to that of Antares. 
At 40°N, Fomalhaut rises above the horizon for eight hours and reaches only 20° above the horizon, while Capella, which rises at approximately the same time, will stay above the horizon for twenty hours. Fomalhaut can be located in northern latitudes by the fact that the western (right-hand) side of the Square of Pegasus points to it. Continuing the line from Beta to Alpha Pegasi towards the southern horizon, Fomalhaut is about 45˚ south of Alpha Pegasi, with no bright stars in between. Properties Fomalhaut is a young star, for many years thought to be only 100 to 300 million years old, with a potential lifespan of a billion years. A 2012 study gave a slightly higher age of . The surface temperature of the star is around . Fomalhaut's mass is about 1.92 times that of the Sun, its luminosity is about 16.6 times greater, and its diameter is roughly 1.84 times as large. Fomalhaut is slightly metal-deficient compared to the Sun, which means it is composed of a smaller percentage of elements other than hydrogen and helium. The metallicity is typically determined by measuring the abundance of iron in the photosphere relative to the abundance of hydrogen. A 1997 spectroscopic study measured a value equal to 93% of the Sun's abundance of iron. A second 1997 study deduced a value of 78%, by assuming Fomalhaut has the same metallicity as the neighboring star TW Piscis Austrini, which has since been argued to be a physical companion. In 2004, a stellar evolutionary model of Fomalhaut yielded a metallicity of 79%. Finally, in 2008, a spectroscopic measurement gave a significantly lower value of 46%. Fomalhaut has been claimed to be one of approximately 16 stars belonging to the Castor Moving Group. This is an association of stars which share a common motion through space, and have been claimed to be physically associated. Other members of this group include Castor and Vega. The moving group has an estimated age of and originated from the same location. More recent work has found that purported members of the Castor Moving Group appear to not only have a wide range of ages, but their velocities are too different to have been possibly associated with one another in the distant past. Hence, "membership" in this dynamical group has no bearing on the age of the Fomalhaut system. Debris disks and suspected planets Fomalhaut is surrounded by several debris disks. The inner disk is a high-carbon small-grain (10–300 nm) ash disk, clustering at 0.1 AU from the star. Next is a disk of larger particles, with inner edge 0.4-1 AU of the star. The innermost disk is unexplained as yet. The outermost disk is at a radial distance of , in a toroidal shape with a very sharp inner edge, all inclined 24 degrees from edge-on. The dust is distributed in a belt about 25 AU wide. The geometric center of the disk is offset by about from Fomalhaut. The disk is sometimes referred to as "Fomalhaut's Kuiper belt". Fomalhaut's dusty disk is believed to be protoplanetary, and emits considerable infrared radiation. Measurements of Fomalhaut's rotation indicate that the disk is located in the star's equatorial plane, as expected from theories of star and planet formation. Herschel Space Observatory images of Fomalhaut, analysed in 2012, reveal that a large amount of fluffy micrometer-sized dust is present in the outer dust belt. Because such dust is expected to be blown out of the system by stellar radiation pressure on short timescales, its presence indicates a constant replenishment by collisions of planetesimals. 
The fluffy morphology of the grains suggests a cometary origin. The collision rate is estimated to be approximately 2000 kilometre-sized comets per day. Observations of this outer dust ring by the Atacama Large Millimeter Array also suggested the possible existence of two planets in the system. If there are additional planets from 4 to 10 AU, they must be under ; if from 2.5 outward, then . On November 13, 2008, astronomers announced an extrasolar planet candidate orbiting just inside the outer debris ring. This was the first extrasolar orbiting object candidate to be directly imaged in visible light, captured by the Hubble Space Telescope. The mass of the tentative planet, Fomalhaut b, was estimated to be less than three times the mass of Jupiter, and at least the mass of Neptune. However, M-band images taken from the MMT Observatory put strong limits on the existence of gas giants within 40 AU of the star, and Spitzer Space Telescope imaging suggested that the object Fomalhaut b was more likely to be a dust cloud. A later 2019 synthesis of new and existing direct observations of the object confirmed that it is expanding, is losing brightness, does not have enough mass to detectably perturb the outer ring while crossing it, and is probably a dispersing cloud of debris from a massive planetesimal collision, on a hyperbolic orbit destined to leave the Fomalhaut A system. Further 2022 observations with the James Webb Space Telescope in the mid-infrared failed to resolve the object in the MIRI wideband filter wavelength range, which the same team reported to be consistent with the previous result. The same 2022 JWST imaging data revealed another apparent feature in the outer disk, dubbed the "Great Dust Cloud". However, another team's analysis, which included other existing data, preferred to interpret it as a coincident background object, not part of the outer ring. Another 2023 study detected 10 point sources around Fomalhaut; all but one of these are background objects, including the "Great Dust Cloud", but the nature of the last is unclear. It may be a background object, or a planetary companion to Fomalhaut. An additional outer hot disk component has been reported at 0.21–0.62 AU or 0.88–1.08 AU from the star. Fomalhaut B (TW Piscis Austrini) Fomalhaut forms a binary star with the K4-type star TW Piscis Austrini (TW PsA), which lies away from Fomalhaut, and its space velocity agrees with that of Fomalhaut within , consistent with being a bound companion. A recent age estimate for TW PsA () agrees very well with the isochronal age for Fomalhaut (), further arguing for the two stars forming a physical binary. The designation TW Piscis Austrini is astronomical nomenclature for a variable star. Fomalhaut B is a flare star of the type known as a BY Draconis variable. It varies slightly in apparent magnitude, ranging from 6.44 to 6.49 over a 10.3-day period. While smaller than the Sun, it is relatively large for a flare star. Most flare stars are red M-type dwarfs. In 2019, a team of researchers analyzing the astrometry, radial velocity measurements, and images of Fomalhaut B suggested the existence of a planet orbiting the star with a mass of Jupiter masses, and a poorly defined orbital period loosely centering around 25 years. Fomalhaut C (LP 876-10) LP 876-10 is also associated with the Fomalhaut system, making it a triple system. 
In October 2013, Eric Mamajek and collaborators from the RECONS consortium announced that the previously known high-proper-motion star LP 876-10 had a distance, velocity, and color-magnitude position consistent with being another member of the Fomalhaut system. LP 876-10 was originally catalogued as a high-proper-motion star by Willem Luyten in his 1979 NLTT catalogue; however, a precise trigonometric parallax and radial velocity was only measured quite recently. LP 876-10 is a red dwarf of spectral type M4V, and located even farther from Fomalhaut A than TW PsA—about 5.7° away from Fomalhaut A in the sky, in the neighbouring constellation Aquarius, whereas both Fomalhaut A and TW PsA are located in constellation Piscis Austrinus. Its current separation from Fomalhaut A is about , and it is currently located away from TW PsA (Fomalhaut B). LP 876-10 is located well within the tidal radius of the Fomalhaut system, which is . Although LP 876-10 is itself catalogued as a binary star in the Washington Double Star Catalog (called "WSI 138"), there was no sign of a close-in stellar companion in the imaging, spectral, or astrometric data in the Mamajek et al. study. In December 2013, Kennedy et al. reported the discovery of a cold dusty debris disk associated with Fomalhaut C, using infrared images from the Herschel Space Observatory. Multiple-star systems hosting multiple debris disks are exceedingly rare. Etymology and cultural significance Fomalhaut has had various names ascribed to it through time, and has been recognized by many cultures of the northern hemisphere, including the Arabs, Persians, and Chinese. It marked the solstice in 2500 BC. It was also a marker for the worship of Demeter in Eleusis. It is considered to be one of the four "royal stars" of the Persians. The Latin names are "the mouth of the Southern Fish". A folk name among the early Arabs was Difdi' al Awwal ( ) "the first frog" (the second frog is Beta Ceti). The Chinese name (Mandarin: Běiluòshīmén), meaning North Gate of the Military Camp, because this star is marking itself and stands alone in North Gate of the Military Camp asterism, Encampment mansion (see: Chinese constellations). (Běiluòshīmén), westernized into Pi Lo Sze Mun by R.H. Allen. To the Moporr Aboriginal people of South Australia, it is a male being called Buunjill. The Wardaman people of the Northern Territory called Fomalhaut Menggen —white cockatoo. Fomalhaut-Earthwork B, in Mounds State Park near Anderson, Indiana, lines up with the rising of the star Fomalhaut in the fall months, according to the Indiana Department of Natural Resources. In 1980, astronomer Jack Robinson proposed that the rising azimuth of Fomalhaut was marked by cairn placements at both the Bighorn medicine wheel in Wyoming, USA, and the Moose Mountain medicine wheel in Saskatchewan, Canada. New Scientist magazine termed it the "Great Eye of Sauron", comparing its shape and debris ring to the aforementioned "eye" in the Peter Jackson Lord of the Rings films. USS Fomalhaut (AK-22) was a United States navy amphibious cargo ship.
Physical sciences
Notable stars
Astronomy
151783
https://en.wikipedia.org/wiki/Stirling%27s%20approximation
Stirling's approximation
In mathematics, Stirling's approximation (or Stirling's formula) is an asymptotic approximation for factorials. It is a good approximation, leading to accurate results even for small values of . It is named after James Stirling, though a related but less precise result was first stated by Abraham de Moivre. One way of stating the approximation involves the logarithm of the factorial: where the big O notation means that, for all sufficiently large values of , the difference between and will be at most proportional to the logarithm of . In computer science applications such as the worst-case lower bound for comparison sorting, it is convenient to instead use the binary logarithm, giving the equivalent form The error term in either base can be expressed more precisely as , corresponding to an approximate formula for the factorial itself, Here the sign means that the two quantities are asymptotic, that is, their ratio tends to 1 as tends to infinity. Derivation Roughly speaking, the simplest version of Stirling's formula can be quickly obtained by approximating the sum with an integral: The full formula, together with precise estimates of its error, can be derived as follows. Instead of approximating , one considers its natural logarithm, as this is a slowly varying function: The right-hand side of this equation minus is the approximation by the trapezoid rule of the integral and the error in this approximation is given by the Euler–Maclaurin formula: where is a Bernoulli number, and is the remainder term in the Euler–Maclaurin formula. Take limits to find that Denote this limit as . Because the remainder in the Euler–Maclaurin formula satisfies where big-O notation is used, combining the equations above yields the approximation formula in its logarithmic form: Taking the exponential of both sides and choosing any positive integer , one obtains a formula involving an unknown quantity . For , the formula is The quantity can be found by taking the limit on both sides as tends to infinity and using Wallis' product, which shows that . Therefore, one obtains Stirling's formula: Alternative derivations An alternative formula for using the gamma function is (as can be seen by repeated integration by parts). Rewriting and changing variables , one obtains Applying Laplace's method one has which recovers Stirling's formula: Higher orders In fact, further corrections can also be obtained using Laplace's method. From previous result, we know that , so we "peel off" this dominant term, then perform two changes of variables, to obtain:To verify this: . Now the function is unimodal, with maximum value zero. Locally around zero, it looks like , which is why we are able to perform Laplace's method. In order to extend Laplace's method to higher orders, we perform another change of variables by . This equation cannot be solved in closed form, but it can be solved by serial expansion, which gives us . Now plug back to the equation to obtainnotice how we don't need to actually find , since it is cancelled out by the integral. Higher orders can be achieved by computing more terms in , which can be obtained programmatically. Thus we get Stirling's formula to two orders: Complex-analytic version A complex-analysis version of this method is to consider as a Taylor coefficient of the exponential function , computed by Cauchy's integral formula as This line integral can then be approximated using the saddle-point method with an appropriate choice of contour radius . 
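For reference, the two standard statements of the approximation discussed above are, in the conventional notation,

$$\ln n! = n\ln n - n + O(\ln n), \qquad n! \sim \sqrt{2\pi n}\,\left(\frac{n}{e}\right)^{n}.$$

A minimal Python check of the asymptotic form against the exact factorial shows how quickly the relative error shrinks (roughly like 1/(12n)); the test values are arbitrary.

```python
import math

def stirling(n):
    """Basic Stirling approximation: sqrt(2*pi*n) * (n/e)**n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (1, 2, 5, 10, 20):
    exact = math.factorial(n)
    approx = stirling(n)
    rel_err = abs(approx - exact) / exact
    print(f"n={n:2d}  exact={exact:>20d}  stirling={approx:.6e}  rel.err={rel_err:.2e}")
```

Even at n = 1 the relative error is only about 8%, and by n = 10 it has fallen below 1%, in line with the claim that the formula is accurate even for small arguments.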
The dominant portion of the integral near the saddle point is then approximated by a real integral and Laplace's method, while the remaining portion of the integral can be bounded above to give an error term. Using the Central Limit Theorem and the Poisson distribution An alternative version uses the fact that the Poisson distribution converges to a normal distribution by the Central Limit Theorem. Since the Poisson distribution with parameter converges to a normal distribution with mean and variance , their density functions will be approximately the same: Evaluating this expression at the mean, at which the approximation is particularly accurate, simplifies this expression to: Taking logs then results in: which can easily be rearranged to give: Evaluating at gives the usual, more precise form of Stirling's approximation. Speed of convergence and error estimates Stirling's formula is in fact the first approximation to the following series (now called the Stirling series): An explicit formula for the coefficients in this series was given by G. Nemes. Further terms are listed in the On-Line Encyclopedia of Integer Sequences as and . The first graph in this section shows the relative error vs. , for 1 through all 5 terms listed above. (Bender and Orszag p. 218) gives the asymptotic formula for the coefficients:which shows that it grows superexponentially, and that by the ratio test the radius of convergence is zero. As , the error in the truncated series is asymptotically equal to the first omitted term. This is an example of an asymptotic expansion. It is not a convergent series; for any particular value of there are only so many terms of the series that improve accuracy, after which accuracy worsens. This is shown in the next graph, which shows the relative error versus the number of terms in the series, for larger numbers of terms. More precisely, let be the Stirling series to terms evaluated at . The graphs show which, when small, is essentially the relative error. Writing Stirling's series in the form it is known that the error in truncating the series is always of the opposite sign and at most the same magnitude as the first omitted term. Other bounds, due to Robbins, valid for all positive integers are This upper bound corresponds to stopping the above series for after the term. The lower bound is weaker than that obtained by stopping the series after the term. A looser version of this bound is that for all . Stirling's formula for the gamma function For all positive integers, where denotes the gamma function. However, the gamma function, unlike the factorial, is more broadly defined for all complex numbers other than non-positive integers; nevertheless, Stirling's formula may still be applied. If , then Repeated integration by parts gives where is the th Bernoulli number (note that the limit of the sum as is not convergent, so this formula is just an asymptotic expansion). The formula is valid for large enough in absolute value, when , where is positive, with an error term of . The corresponding approximation may now be written: where the expansion is identical to that of Stirling's series above for , except that is replaced with . A further application of this asymptotic expansion is for complex argument with constant . See for example the Stirling formula applied in of the Riemann–Siegel theta function on the straight line . Error bounds For any positive integer , the following notation is introduced: and Then For further information and other error bounds, see the cited papers. 
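The quality of the approximation is easy to check numerically. The following Python sketch is an illustration added to this extract (it is not part of the original article); it compares the factorial with the leading-order formula n! ≈ √(2πn)·(n/e)^n and with the same formula multiplied by the first Stirling-series correction 1 + 1/(12n). The printed ratios tend to 1 as n grows, and the corrected version does so markedly faster.

import math

def stirling(n):
    # Leading-order Stirling approximation: n! ~ sqrt(2*pi*n) * (n/e)**n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

def stirling_corrected(n):
    # Include the first correction term 1/(12*n) of the Stirling series
    return stirling(n) * (1 + 1 / (12 * n))

for n in (1, 2, 5, 10, 20, 50):
    exact = math.factorial(n)
    print(n, stirling(n) / exact, stirling_corrected(n) / exact)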
A convergent version of Stirling's formula Thomas Bayes showed, in a letter to John Canton published by the Royal Society in 1763, that Stirling's formula did not give a convergent series. Obtaining a convergent version of Stirling's formula entails evaluating Binet's formula: One way to do this is by means of a convergent series of inverted rising factorials. If then where where denotes the Stirling numbers of the first kind. From this one obtains a version of Stirling's series which converges when . Stirling's formula may also be given in convergent form as where Versions suitable for calculators The approximation and its equivalent form can be obtained by rearranging Stirling's extended formula and observing a coincidence between the resultant power series and the Taylor series expansion of the hyperbolic sine function. This approximation is good to more than 8 decimal digits for with a real part greater than 8. Robert H. Windschitl suggested it in 2002 for computing the gamma function with fair accuracy on calculators with limited program or register memory. Gergő Nemes proposed in 2007 an approximation which gives the same number of exact digits as the Windschitl approximation but is much simpler: or equivalently, An alternative approximation for the gamma function stated by Srinivasa Ramanujan in Ramanujan's lost notebook is for . The equivalent approximation for has an asymptotic error of and is given by The approximation may be made precise by giving paired upper and lower bounds; one such inequality is History The formula was first discovered by Abraham de Moivre in the form De Moivre gave an approximate rational-number expression for the natural logarithm of the constant. Stirling's contribution consisted of showing that the constant is precisely .
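As a companion illustration of the calculator-oriented approximations discussed above, the sketch below uses the commonly quoted form of Ramanujan's factorial approximation, n! ≈ √π·(n/e)^n·(8n³ + 4n² + n + 1/30)^(1/6). Because the displayed formulas did not survive extraction in this copy, that exact expression should be treated as an assumption to check against the source; numerically it already agrees with the factorial to several significant digits for modest n.

import math

def ramanujan_factorial(n):
    # Commonly quoted form of Ramanujan's approximation (assumed here, not taken from this extract):
    # n! ~ sqrt(pi) * (n/e)**n * (8*n**3 + 4*n**2 + n + 1/30)**(1/6)
    return math.sqrt(math.pi) * (n / math.e) ** n * (8 * n**3 + 4 * n**2 + n + 1 / 30) ** (1 / 6)

for n in (5, 10, 20):
    exact = math.factorial(n)
    approx = ramanujan_factorial(n)
    print(n, exact, round(approx), abs(approx - exact) / exact)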
Mathematics
Specific functions
null
151828
https://en.wikipedia.org/wiki/Side%20effect
Side effect
In medicine, a side effect is an effect of the use of a medicinal drug or other treatment, usually adverse but sometimes beneficial, that is unintended. Herbal and traditional medicines also have side effects. A drug or procedure usually used for a specific effect may be used specifically because of a beneficial side-effect; this is termed "off-label use" until such use is approved. For instance, X-rays have long been used as an imaging technique; the discovery of their oncolytic capability led to their use in radiotherapy for ablation of malignant tumours. Frequency of side effects The World Health Organization and other health organisations characterise the probability of experiencing side effects as: Very common, ≥ 1⁄10; Common (frequent), 1⁄10 to 1⁄100; Uncommon (infrequent), 1⁄100 to 1⁄1000; Rare, 1⁄1000 to 1⁄10000; Very rare, < 1⁄10000. The European Commission recommends that the list should contain only effects where there is "at least a reasonable possibility" that they are caused by the drug and the frequency "should represent crude incidence rates (and not differences or relative risks calculated against placebo or other comparator)". The frequency describes how often symptoms appear after taking the drug, without assuming that they were necessarily caused by the drug. Both healthcare providers and lay people misinterpret the frequency of side effects as describing the increase in frequency caused by the drug. Examples of therapeutic side effects Most drugs and procedures have a multitude of reported adverse side effects; the information leaflets provided with virtually all drugs list possible side effects. Beneficial side effects are less common; some examples, in many cases of side-effects that ultimately gained regulatory approval as intended effects, are: Bevacizumab (Avastin), used to slow the growth of blood vessels, has been used against wet age-related macular degeneration, as well as macular edema from diseases such as diabetic retinopathy and central retinal vein occlusion. Buprenorphine has been shown experimentally (1982–1995) to be effective against severe, refractory depression. Bupropion (Wellbutrin), an anti-depressant, also helps smoking cessation; this indication was later approved, and the name of the drug as sold for smoking cessation is Zyban. Bupropion branded as Zyban may be sold at a higher price than as Wellbutrin, so some physicians prescribe Wellbutrin for smoking cessation. Carbamazepine is an approved treatment for bipolar disorder and epileptic seizures, but it has side effects useful in treating attention-deficit hyperactivity disorder (ADHD), schizophrenia, phantom limb syndrome, paroxysmal extreme pain disorder, neuromyotonia, and post-traumatic stress disorder. Dexamethasone and betamethasone are given in premature labor to enhance pulmonary maturation of the fetus. Doxepin has been used to treat angioedema and severe allergic reactions due to its strong antihistamine properties. Gabapentin, approved for treatment of seizures and postherpetic neuralgia in adults, has side effects which are useful in treating bipolar disorder, essential tremor, hot flashes, migraine prophylaxis, neuropathic pain syndromes, phantom limb syndrome, and restless leg syndrome. Hydroxyzine, an antihistamine, is also used as an anxiolytic. Magnesium sulfate is used in obstetrics for premature labor and preeclampsia. Methotrexate (MTX), approved for the treatment of choriocarcinoma, is frequently used for the medical treatment of an unruptured ectopic pregnancy. 
The SSRI medication sertraline is approved as an antidepressant but delays sexual climax in men, and can be used to treat premature ejaculation. Sildenafil was originally intended to treat hypertension and angina pectoris; subsequently, it was discovered that it also produces erections, for which it was later approved. Terazosin, an α1-adrenergic antagonist approved to treat benign prostatic hyperplasia (enlarged prostate) and hypertension, is (one of several drugs) used off-label to treat drug-induced diaphoresis and hyperhidrosis (excessive sweating). Thalidomide, a drug sold over the counter from 1957 to 1961 as a tranquiliser and treatment for morning sickness of pregnancy, became notorious for causing tens of thousands of babies to be born without limbs or with other conditions, or stillborn. The drug, though still subject to other adverse side-effects, is now used to treat cancers and skin disorders, and is on the World Health Organization's List of Essential Medicines.
Biology and health sciences
General concepts_2
Health
151864
https://en.wikipedia.org/wiki/Divergence%20theorem
Divergence theorem
In vector calculus, the divergence theorem, also known as Gauss's theorem or Ostrogradsky's theorem, is a theorem relating the flux of a vector field through a closed surface to the divergence of the field in the volume enclosed. More precisely, the divergence theorem states that the surface integral of a vector field over a closed surface, which is called the "flux" through the surface, is equal to the volume integral of the divergence over the region enclosed by the surface. Intuitively, it states that "the sum of all sources of the field in a region (with sinks regarded as negative sources) gives the net flux out of the region". The divergence theorem is an important result for the mathematics of physics and engineering, particularly in electrostatics and fluid dynamics. In these fields, it is usually applied in three dimensions. However, it generalizes to any number of dimensions. In one dimension, it is equivalent to the fundamental theorem of calculus. In two dimensions, it is equivalent to Green's theorem. Explanation using liquid flow Vector fields are often illustrated using the example of the velocity field of a fluid, such as a gas or liquid. A moving liquid has a velocity—a speed and a direction—at each point, which can be represented by a vector, so that the velocity of the liquid at any moment forms a vector field. Consider an imaginary closed surface S inside a body of liquid, enclosing a volume of liquid. The flux of liquid out of the volume at any time is equal to the volume rate of fluid crossing this surface, i.e., the surface integral of the velocity over the surface. Since liquids are incompressible, the amount of liquid inside a closed volume is constant; if there are no sources or sinks inside the volume then the flux of liquid out of S is zero. If the liquid is moving, it may flow into the volume at some points on the surface S and out of the volume at other points, but the amounts flowing in and out at any moment are equal, so the net flux of liquid out of the volume is zero. However if a source of liquid is inside the closed surface, such as a pipe through which liquid is introduced, the additional liquid will exert pressure on the surrounding liquid, causing an outward flow in all directions. This will cause a net outward flow through the surface S. The flux outward through S equals the volume rate of flow of fluid into S from the pipe. Similarly if there is a sink or drain inside S, such as a pipe which drains the liquid off, the external pressure of the liquid will cause a velocity throughout the liquid directed inward toward the location of the drain. The volume rate of flow of liquid inward through the surface S equals the rate of liquid removed by the sink. If there are multiple sources and sinks of liquid inside S, the flux through the surface can be calculated by adding up the volume rate of liquid added by the sources and subtracting the rate of liquid drained off by the sinks. The volume rate of flow of liquid through a source or sink (with the flow through a sink given a negative sign) is equal to the divergence of the velocity field at the pipe mouth, so adding up (integrating) the divergence of the liquid throughout the volume enclosed by S equals the volume rate of flux through S. This is the divergence theorem. The divergence theorem is employed in any conservation law which states that the total volume of all sinks and sources, that is the volume integral of the divergence, is equal to the net flow across the volume's boundary. 
Mathematical statement Suppose is a subset of (in the case of represents a volume in three-dimensional space) which is compact and has a piecewise smooth boundary (also indicated with ). If is a continuously differentiable vector field defined on a neighborhood of , then: The left side is a volume integral over the volume , and the right side is the surface integral over the boundary of the volume . The closed, measurable set is oriented by outward-pointing normals, and is the outward pointing unit normal at almost each point on the boundary . ( may be used as a shorthand for .) In terms of the intuitive description above, the left-hand side of the equation represents the total of the sources in the volume , and the right-hand side represents the total flow across the boundary . Informal derivation The divergence theorem follows from the fact that if a volume is partitioned into separate parts, the flux out of the original volume is equal to the algebraic sum of the flux out of each component volume. This is true despite the fact that the new subvolumes have surfaces that were not part of the original volume's surface, because these surfaces are just partitions between two of the subvolumes and the flux through them just passes from one volume to the other and so cancels out when the flux out of the subvolumes is summed. See the diagram. A closed, bounded volume is divided into two volumes and by a surface (green). The flux out of each component region is equal to the sum of the flux through its two faces, so the sum of the flux out of the two parts is where and are the flux out of surfaces and , is the flux through out of volume 1, and is the flux through out of volume 2. The point is that surface is part of the surface of both volumes. The "outward" direction of the normal vector is opposite for each volume, so the flux out of one through is equal to the negative of the flux out of the other so these two fluxes cancel in the sum. Therefore: Since the union of surfaces and is This principle applies to a volume divided into any number of parts, as shown in the diagram. Since the integral over each internal partition (green surfaces) appears with opposite signs in the flux of the two adjacent volumes they cancel out, and the only contribution to the flux is the integral over the external surfaces (grey). Since the external surfaces of all the component volumes equal the original surface. The flux out of each volume is the surface integral of the vector field over the surface The goal is to divide the original volume into infinitely many infinitesimal volumes. As the volume is divided into smaller and smaller parts, the surface integral on the right, the flux out of each subvolume, approaches zero because the surface area approaches zero. However, from the definition of divergence, the ratio of flux to volume, , the part in parentheses below, does not in general vanish but approaches the divergence as the volume approaches zero. As long as the vector field has continuous derivatives, the sum above holds even in the limit when the volume is divided into infinitely small increments As approaches zero volume, it becomes the infinitesimal , the part in parentheses becomes the divergence, and the sum becomes a volume integral over Since this derivation is coordinate free, it shows that the divergence does not depend on the coordinates used. Proofs For bounded open subsets of Euclidean space We are going to prove the following: Proof of Theorem. 
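The displayed equation in the statement above was lost in extraction. For reference, in the usual textbook notation the theorem reads

\iiint_V (\nabla \cdot \mathbf{F}) \, dV = \iint_{\partial V} (\mathbf{F} \cdot \hat{\mathbf{n}}) \, dS ,

where V is the compact region with piecewise smooth boundary ∂V, F is the continuously differentiable vector field, and the unit normal on ∂V points outward.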
For compact Riemannian manifolds with boundary We are going to prove the following: Proof of Theorem. We use the Einstein summation convention. By using a partition of unity, we may assume that and have compact support in a coordinate patch . First consider the case where the patch is disjoint from . Then is identified with an open subset of and integration by parts produces no boundary terms: In the last equality we used the Voss-Weyl coordinate formula for the divergence, although the preceding identity could be used to define as the formal adjoint of . Now suppose intersects . Then is identified with an open set in . We zero extend and to and perform integration by parts to obtain where . By a variant of the straightening theorem for vector fields, we may choose so that is the inward unit normal at . In this case is the volume element on and the above formula reads This completes the proof. Corollaries By replacing in the divergence theorem with specific forms, other useful identities can be derived (cf. vector identities). With for a scalar function and a vector field , A special case of this is , in which case the theorem is the basis for Green's identities. With for two vector fields and , where denotes a cross product, With for two vector fields and , where denotes a dot product, With for a scalar function and vector field c: The last term on the right vanishes for constant or any divergence free (solenoidal) vector field, e.g. Incompressible flows without sources or sinks such as phase change or chemical reactions etc. In particular, taking to be constant: With for vector field and constant vector c: By reordering the triple product on the right hand side and taking out the constant vector of the integral, Hence, Example Suppose we wish to evaluate where is the unit sphere defined by and is the vector field The direct computation of this integral is quite difficult, but we can simplify the derivation of the result using the divergence theorem, because the divergence theorem says that the integral is equal to: where is the unit ball: Since the function is positive in one hemisphere of and negative in the other, in an equal and opposite way, its total integral over is zero. The same is true for : Therefore, because the unit ball has volume . Applications Differential and integral forms of physical laws As a result of the divergence theorem, a host of physical laws can be written in both a differential form (where one quantity is the divergence of another) and an integral form (where the flux of one quantity through a closed surface is equal to another quantity). Three examples are Gauss's law (in electrostatics), Gauss's law for magnetism, and Gauss's law for gravity. Continuity equations Continuity equations offer more examples of laws with both differential and integral forms, related to each other by the divergence theorem. In fluid dynamics, electromagnetism, quantum mechanics, relativity theory, and a number of other fields, there are continuity equations that describe the conservation of mass, momentum, energy, probability, or other quantities. Generically, these equations state that the divergence of the flow of the conserved quantity is equal to the distribution of sources or sinks of that quantity. The divergence theorem states that any such continuity equation can be written in a differential form (in terms of a divergence) and an integral form (in terms of a flux). 
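Written out explicitly (restated here because the displayed formulas are not reproduced in this extract), a continuity equation for a density ρ with flux j and source term σ relates a differential form and an integral form exactly as described above:

\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = \sigma
\qquad\Longrightarrow\qquad
\frac{d}{dt} \iiint_V \rho \, dV + \iint_{\partial V} \mathbf{j} \cdot \hat{\mathbf{n}} \, dS = \iiint_V \sigma \, dV ,

where the second equation follows from the first by integrating over a fixed region V with piecewise smooth boundary and applying the divergence theorem to the ∇·j term.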
Inverse-square laws Any inverse-square law can instead be written in a Gauss's law-type form (with a differential and integral form, as described above). Two examples are Gauss's law (in electrostatics), which follows from the inverse-square Coulomb's law, and Gauss's law for gravity, which follows from the inverse-square Newton's law of universal gravitation. The derivation of the Gauss's law-type equation from the inverse-square formulation or vice versa is exactly the same in both cases; see either of those articles for details. History Joseph-Louis Lagrange introduced the notion of surface integrals in 1760 and again in more general terms in 1811, in the second edition of his Mécanique Analytique. Lagrange employed surface integrals in his work on fluid mechanics. He discovered the divergence theorem in 1762. Carl Friedrich Gauss was also using surface integrals while working on the gravitational attraction of an elliptical spheroid in 1813, when he proved special cases of the divergence theorem. He proved additional special cases in 1833 and 1839. But it was Mikhail Ostrogradsky, who gave the first proof of the general theorem, in 1826, as part of his investigation of heat flow. Special cases were proven by George Green in 1828 in An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, Siméon Denis Poisson in 1824 in a paper on elasticity, and Frédéric Sarrus in 1828 in his work on floating bodies. Worked examples Example 1 To verify the planar variant of the divergence theorem for a region : and the vector field: The boundary of is the unit circle, , that can be represented parametrically by: such that where units is the length arc from the point to the point on . Then a vector equation of is At a point on : Therefore, Because , we can evaluate and because . Thus Example 2 Let's say we wanted to evaluate the flux of the following vector field defined by bounded by the following inequalities: By the divergence theorem, We now need to determine the divergence of . If is a three-dimensional vector field, then the divergence of is given by . Thus, we can set up the following flux integral as follows: Now that we have set up the integral, we can evaluate it. Generalizations Multiple dimensions One can use the generalised Stokes' theorem to equate the -dimensional volume integral of the divergence of a vector field over a region to the -dimensional surface integral of over the boundary of : This equation is also known as the divergence theorem. When , this is equivalent to Green's theorem. When , it reduces to the fundamental theorem of calculus, part 2. Tensor fields Writing the theorem in Einstein notation: suggestively, replacing the vector field with a rank- tensor field , this can be generalized to: where on each side, tensor contraction occurs for at least one index. This form of the theorem is still in 3d, each index takes values 1, 2, and 3. It can be generalized further still to higher (or lower) dimensions (for example to 4d spacetime in general relativity).
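The worked examples above lose their formulas in this extract, but the theorem is easy to verify symbolically for a simple field of one's own choosing. The following Python (SymPy) sketch uses the hypothetical field F = (x, y, z) on the unit ball, an arbitrary choice for illustration rather than the field used in the article's examples. Its divergence is the constant 3, so the volume integral over the unit ball and the flux through the unit sphere should both equal 4π.

import sympy as sp

x, y, z = sp.symbols('x y z')
r, th, ph = sp.symbols('r theta phi', positive=True)

# Hypothetical field F = (x, y, z); its divergence is 3.
F = sp.Matrix([x, y, z])
divF = sum(sp.diff(F[i], var) for i, var in enumerate((x, y, z)))   # -> 3

# Volume integral of div F over the unit ball (spherical coordinates, Jacobian r**2 * sin(theta)).
# Because div F is constant here, no coordinate substitution into the integrand is needed.
vol_integral = sp.integrate(divF * r**2 * sp.sin(th), (r, 0, 1), (th, 0, sp.pi), (ph, 0, 2*sp.pi))

# Flux of F through the unit sphere: the outward normal there is (x, y, z) itself,
# so F . n = x**2 + y**2 + z**2 = 1 and the flux is just the surface area.
flux = sp.integrate(sp.Integer(1) * sp.sin(th), (th, 0, sp.pi), (ph, 0, 2*sp.pi))

print(vol_integral, flux)   # both print 4*pi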
Mathematics
Multivariable and vector calculus
null
151925
https://en.wikipedia.org/wiki/Del
Del
Del, or nabla, is an operator used in mathematics (particularly in vector calculus) as a vector differential operator, usually represented by the nabla symbol ∇. When applied to a function defined on a one-dimensional domain, it denotes the standard derivative of the function as defined in calculus. When applied to a field (a function defined on a multi-dimensional domain), it may denote any one of three operations depending on the way it is applied: the gradient or (locally) steepest slope of a scalar field (or sometimes of a vector field, as in the Navier–Stokes equations); the divergence of a vector field; or the curl (rotation) of a vector field. Del is a very convenient mathematical notation for those three operations (gradient, divergence, and curl) that makes many equations easier to write and remember. The del symbol (or nabla) can be formally defined as a vector operator whose components are the corresponding partial derivative operators. As a vector operator, it can act on scalar and vector fields in three different ways, giving rise to three different differential operations: first, it can act on scalar fields by a formal scalar multiplication—to give a vector field called the gradient; second, it can act on vector fields by a formal dot product—to give a scalar field called the divergence; and lastly, it can act on vector fields by a formal cross product—to give a vector field called the curl. These formal products do not necessarily commute with other operators or products. These three uses, detailed below, are summarized as: Gradient: Divergence: Curl: Definition In the Cartesian coordinate system with coordinates and standard basis , del is a vector operator whose components are the partial derivative operators ; that is, Where the expression in parentheses is a row vector. In three-dimensional Cartesian coordinate system with coordinates and standard basis or unit vectors of axes , del is written as As a vector operator, del naturally acts on scalar fields via scalar multiplication, and naturally acts on vector fields via dot products and cross products. More specifically, for any scalar field and any vector field , if one defines then using the above definition of , one may write and and Example: Del can also be expressed in other coordinate systems, see for example del in cylindrical and spherical coordinates. Notational uses Del is used as a shorthand form to simplify many long mathematical expressions. It is most commonly used to simplify expressions for the gradient, divergence, curl, directional derivative, and Laplacian. Gradient The vector derivative of a scalar field is called the gradient, and it can be represented as: It always points in the direction of greatest increase of , and it has a magnitude equal to the maximum rate of increase at the point—just like a standard derivative. In particular, if a hill is defined as a height function over a plane , the gradient at a given location will be a vector in the xy-plane (visualizable as an arrow on a map) pointing along the steepest direction. The magnitude of the gradient is the value of this steepest slope. 
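Because the displayed formulas are missing from this extract, a concrete computation may help fix the three roles of del. The following Python (SymPy) sketch applies del to arbitrarily chosen sample fields; the fields themselves are assumptions for illustration only, not taken from the article.

import sympy as sp
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')                            # Cartesian coordinates N.x, N.y, N.z with unit vectors N.i, N.j, N.k
f = N.x**2 * N.y + sp.sin(N.z)                 # sample scalar field
v = N.x*N.y*N.i + N.y*N.z*N.j + N.z*N.x*N.k    # sample vector field

print(gradient(f))      # del f   : a vector field (the gradient)
print(divergence(v))    # del . v : a scalar field (the divergence)
print(curl(v))          # del x v : a vector field (the curl)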
In particular, this notation is powerful because the gradient product rule looks very similar to the 1d-derivative case: However, the rules for dot products do not turn out to be simple, as illustrated by: Divergence The divergence of a vector field is a scalar field that can be represented as: The divergence is roughly a measure of a vector field's increase in the direction it points; but more accurately, it is a measure of that field's tendency to converge toward or diverge from a point. The power of the del notation is shown by the following product rule: The formula for the vector product is slightly less intuitive, because this product is not commutative: Curl The curl of a vector field is a vector function that can be represented as: The curl at a point is proportional to the on-axis torque that a tiny pinwheel would be subjected to if it were centered at that point. The vector product operation can be visualized as a pseudo-determinant: Again the power of the notation is shown by the product rule: The rule for the vector product does not turn out to be simple: Directional derivative The directional derivative of a scalar field in the direction is defined as: Which is equal to the following when the gradient exists This gives the rate of change of a field in the direction of , scaled by the magnitude of . In operator notation, the element in parentheses can be considered a single coherent unit; fluid dynamics uses this convention extensively, terming it the convective derivative—the "moving" derivative of the fluid. Note that is an operator that takes scalar to a scalar. It can be extended to operate on a vector, by separately operating on each of its components. Laplacian The Laplace operator is a scalar operator that can be applied to either vector or scalar fields; for cartesian coordinate systems it is defined as: and the definition for more general coordinate systems is given in vector Laplacian. The Laplacian is ubiquitous throughout modern mathematical physics, appearing for example in Laplace's equation, Poisson's equation, the heat equation, the wave equation, and the Schrödinger equation. Hessian matrix While usually represents the Laplacian, sometimes also represents the Hessian matrix. The former refers to the inner product of , while the latter refers to the dyadic product of : . So whether refers to a Laplacian or a Hessian matrix depends on the context. Tensor derivative Del can also be applied to a vector field with the result being a tensor. The tensor derivative of a vector field (in three dimensions) is a 9-term second-rank tensor – that is, a 3×3 matrix – but can be denoted simply as , where represents the dyadic product. This quantity is equivalent to the transpose of the Jacobian matrix of the vector field with respect to space. The divergence of the vector field can then be expressed as the trace of this matrix. For a small displacement , the change in the vector field is given by: Product rules For vector calculus: For matrix calculus (for which can be written ): Another relation of interest (see e.g. Euler equations) is the following, where is the outer product tensor: Second derivatives When del operates on a scalar or vector, either a scalar or vector is returned. Because of the diversity of vector products (scalar, dot, cross) one application of del already gives rise to three major derivatives: the gradient (scalar product), divergence (dot product), and curl (cross product). 
Applying these three sorts of derivatives again to each other gives five possible second derivatives, for a scalar field f or a vector field v; the use of the scalar Laplacian and vector Laplacian gives two more: These are of interest principally because they are not always unique or independent of each other. As long as the functions are well-behaved ( in most cases), two of them are always zero: Two of them are always equal: The 3 remaining vector derivatives are related by the equation: And one of them can even be expressed with the tensor product, if the functions are well-behaved: Precautions Most of the above vector properties (except for those that rely explicitly on del's differential properties—for example, the product rule) rely only on symbol rearrangement, and must necessarily hold if the del symbol is replaced by any other vector. This is part of the value to be gained in notationally representing this operator as a vector. Though one can often replace del with a vector and obtain a vector identity, making those identities mnemonic, the reverse is not necessarily reliable, because del does not commute in general. A counterexample that demonstrates the divergence () and the advection operator () are not commutative: A counterexample that relies on del's differential properties: Central to these distinctions is the fact that del is not simply a vector; it is a vector operator. Whereas a vector is an object with both a magnitude and direction, del has neither a magnitude nor a direction until it operates on a function. For that reason, identities involving del must be derived with care, using both vector identities and differentiation identities such as the product rule.
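The second-derivative identities above, namely that the curl of a gradient and the divergence of a curl vanish for well-behaved fields and that div grad f is the Laplacian of f, can be checked with the same SymPy setup. The sample fields below are again arbitrary choices made for illustration.

import sympy as sp
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')
f = N.x**2 * sp.sin(N.y) + N.z**3                  # sample scalar field
v = N.x*N.y*N.i + sp.cos(N.z)*N.j + N.y**2*N.k     # sample vector field

print(curl(gradient(f)))          # the zero vector: curl grad f = 0
print(divergence(curl(v)))        # 0: div curl v = 0

# div grad f equals the Laplacian of f (sum of unmixed second partials):
laplacian_f = sp.diff(f, N.x, 2) + sp.diff(f, N.y, 2) + sp.diff(f, N.z, 2)
print(sp.simplify(divergence(gradient(f)) - laplacian_f))   # 0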
Mathematics
Calculus and analysis
null
151942
https://en.wikipedia.org/wiki/Vaccinium%20vitis-idaea
Vaccinium vitis-idaea
Vaccinium vitis-idaea is a small evergreen shrub in the heath family, Ericaceae. It is known colloquially as the lingonberry, partridgeberry, foxberry, mountain cranberry, or cowberry. It is native to boreal forest and Arctic tundra throughout the Northern Hemisphere. Commercially cultivated in the United States Pacific Northwest and the Netherlands, the edible berries are also picked in the wild and used in various dishes, especially in Nordic cuisine. Description Vaccinium vitis-idaea spreads by underground stems to form dense clonal colonies. Slender and brittle roots grow from the underground stems. The stems are rounded in cross-section and grow from in height. Leaves grow alternately and are oval, long, with a slightly wavy margin, and sometimes with a notched tip. The flowers are bell-shaped, white to pale pink, long. V. vitis-idaea begins to produce flowers from five to ten years of age. They are pollinated by multiple insect species, including Andrena lapponica and several species of bumblebee. The fruit is a red berry across, with an acidic taste, ripening in late summer to autumn. While bitter early in the season, they sweeten if left on the branch through winter. Cytology is 2n = 24. Related species Vaccinium vitis-idaea differs from the related cranberries in having white flowers with petals partially enclosing the stamens and stigma, rather than pink flowers with petals reflexed backwards, and rounder, less pear-shaped berries. Vaccinium oxycoccos is similar. Hybrids between Vaccinium vitis-idaea and V. myrtillus, named Vaccinium × intermedium Ruthe, are occasionally found in Europe. Taxonomy Varieties There are two regional varieties or subspecies of V. vitis-idaea, one in Eurasia and one in North America, differing in leaf size: V. vitis-idaea var. vitis-idaea L.—syn. V. vitis-idaea subsp. vitis-idaea.cowberry. Eurasia. Leaves are long. V. vitis-idaea var. minus Lodd.—syn. V. vitis-idaea subsp. minus (Lodd.) Hultén.lingonberry. North America. Leaves are long. Etymology Vaccinium vitis-idaea is most commonly known in English as 'lingonberry' or 'cowberry'. The name 'lingonberry' originates from the Swedish name () for the species deriving from Old Norse lyngr, a cognate (thus also a doublet) to 'ling'. The genus name Vaccinium is a classical Latin name for a plant, possibly the bilberry or hyacinth, and may be derived from the Latin , 'berry'. The specific name is derived from Latin ('vine') and , the feminine form of (literally 'from Mount Ida', used in reference to raspberries Rubus idaeus). Worldwide, Vaccinium vitis-idaea is known by at least 25 other common English names, including: bearberry beaverberry cougarberry foxberry lowbush cranberry mountain bilberry mountain cranberry partridgeberry (in Newfoundland and Cape Breton Island) quailberry red whortleberry redberry (in Labrador and the Lower North Shore of Quebec) Distribution and habitat It is native to boreal forest and Arctic tundra throughout the Northern Hemisphere, including Eurasia and North America. Ecology Vaccinium vitis-idaea keeps its leaves all winter even in the coldest years, unusual for a broad-leaved plant, though in its natural habitat it is usually protected from severe cold by snow cover. It is extremely hardy, tolerating temperatures as low as −50 °F (−45 °C) or lower, but grows poorly where summers are hot. It prefers some shade (as from a forest canopy) and constantly moist, acidic soil. Nutrient-poor soils are tolerated but not alkaline soils. 
Conservation The plant is endangered in Michigan. The minus subspecies is listed as a species of special concern and believed extirpated in Connecticut. Cultivation Lingonberry has been commercially cultivated in the Netherlands and other countries since the 1960s. Some cultivars are grown for their ornamental rather than culinary value. In the United Kingdom, the Koralle Group has gained the Royal Horticultural Society's Award of Garden Merit. Uses Culinary Raw lingonberries are 86% water, 13% carbohydrates, 1% protein, and contain negligible fat. In a reference amount, lingonberries supply 54 kcal, and are low-to-moderate sources of vitamin C, B vitamins, and dietary minerals. The berries collected in the wild are a popular fruit in northern, central and eastern Europe, notably in the Nordic countries, the Baltic states, central and northern Europe. In some areas, they can be picked legally on both public and private lands in accordance with the freedom to roam. The berries are quite tart, so they are often cooked and sweetened before eating in the form of lingonberry jam, compote, juice, smoothie or syrup. The raw fruits are also frequently simply mashed with sugar, which preserves most of their nutrients and taste. This mix can be stored at room temperature in closed but not necessarily sealed containers, but in this condition, they are best preserved frozen. Fruit served this way or as compote often accompanies game and liver dishes. In Sweden the traditional Swedish meatballs are served with lingonberry jam alongside boiled or mashed potatoes and gravy sauce. In Sweden, Finland and Norway, reindeer and elk steaks are traditionally served with gravy and lingonberry sauce. Preserved fruit is commonly eaten with meatballs, as well as potato pancakes. A traditional Swedish dessert is (literally 'lingonberry pears'), consisting of fresh pears which are peeled, boiled and preserved in (lingonberry juice) and is commonly eaten during Christmas. This was very common in old times, because it was an easy and tasty way to preserve pears. In Sweden and Russia, when sugar was still a luxury item, the berries were usually preserved simply by putting them whole into bottles of water. This was known as (watered lingonberries); the procedure preserved them until next season. This was also a home remedy against scurvy. This traditional Russian soft drink, known as "lingonberry water", is mentioned by Alexander Pushkin in Eugene Onegin. In Russian folk medicine, lingonberry water was used as a mild laxative. A traditional Finnish dish is sautéed reindeer () with mashed potatoes and lingonberries on the side, either raw, thawed or as a jam. In Finland, whipped semolina pudding flavored with lingonberry () is also popular. In Poland, the berries are often mixed with pears to create a sauce served with poultry or game. The berries can also be used to replace redcurrants when creating Cumberland sauce. The berries are also popular as a wild picked fruit in Eastern Canada, for example in Newfoundland and Labrador and Cape Breton, where they are locally known as partridgeberries or redberries, and on the mainland of Nova Scotia, where they are known as foxberries. In this region they are incorporated into jams, syrups, and baked goods, such as pies, scones, and muffins. In Sweden lingonberries are often sold as jam and juice, and as a key ingredient in dishes. 
They are used to make Lillehammer berry liqueur; and, in East European countries, lingonberry vodka is sold, and vodka with lingonberry juice or mors is a cocktail. The berries are an important food for bears and foxes, and many fruit-eating birds. Caterpillars of the case-bearer moths Coleophora glitzella, Coleophora idaeella and Coleophora vitisella are obligate feeders on V. vitis-idaea leaves. Indigenous North American cuisine Alaska natives mix the berries with rose hip pulp and sugar to make jam, cook the berries as a sauce, and store the berries for future use. The Dakelh use the berries to make jam. The Koyukon freeze the berries for winter use. Inuit dilute and sweeten the juice to make a beverage, freeze and store the berries for spring, and use the berries to make jams and jellies. The Iñupiat use the berries to make two different desserts, one in which the berries are whipped with frozen fish eggs and eaten, and one in which raw berries are mashed with canned milk and seal oil. They also make a dish of the berries cooked with fish eggs, fish (whitefish, sheefish or pike) and blubber. The Upper Tanana boil the berries with sugar and flour to thicken; eat the raw berries, either plain or mixed with sugar, grease or a combination of the two; fry them in grease with sugar or dried fish eggs; or make them into pies, jam, and jelly. They also preserve the berries alone or in grease and store them in a birchbark basket in an underground cache, or freeze them. Use of the minus subspecies The Anticosti people use the fruit to make jams and jellies. The Nihithawak Cree store the berries by freezing them outside during the winter, mix the berries with boiled fish eggs, livers, air bladders and fat and eat them, eat the berries raw as a snack food, or stew them with fish or meat. The Iñupiat of Nelson Island eat the berries, as do the Iñupiat of the Northern Bering Sea and Arctic regions of Alaska, as well as the Inuvialuit. The Haida people, Hesquiaht First Nation, Wuikinuxv and Tsimshian all use the berries as food. Traditional medicine In traditional medicine, V. vitis-idaea was used as an apéritif and astringent. The Upper Tanana ate the berries or used their juice to treat minor respiratory disorders. Other uses The Nihithawak Cree use the berries of the minus subspecies to color porcupine quills, and put the firm, ripe berries on a string to wear as a necklace. The Western Canadian Inuit use the minus subspecies as a tobacco additive or substitute. Explanatory notes
Biology and health sciences
Berries
Plants
151965
https://en.wikipedia.org/wiki/Vernalization
Vernalization
Vernalization () is the induction of a plant's flowering process by exposure to the prolonged cold of winter, or by an artificial equivalent. After vernalization, plants have acquired the ability to flower, but they may require additional seasonal cues or weeks of growth before they will actually do so. The term is sometimes used to refer to the need of herbaceous (non-woody) plants for a period of cold dormancy in order to produce new shoots and leaves, but this usage is discouraged. Many plants grown in temperate climates require vernalization and must experience a period of low winter temperature to initiate or accelerate the flowering process. This ensures that reproductive development and seed production occur in spring and summer, rather than in autumn. The needed cold is often expressed in chill hours. Typical vernalization temperatures are between 1 and 7 degrees Celsius (34 and 45 degrees Fahrenheit). For many perennial plants, such as fruit tree species, a period of cold is needed first to induce dormancy and then later, after the requisite period, to re-emerge from that dormancy prior to flowering. Many monocarpic winter annuals and biennials, including some ecotypes of Arabidopsis thaliana and winter cereals such as wheat, must go through a prolonged period of cold before flowering occurs. History of vernalization research In the history of agriculture, farmers observed a traditional distinction between "winter cereals", whose seeds require chilling (to trigger their subsequent emergence and growth), and "spring cereals", whose seeds can be sown in spring and germinate and then flower soon thereafter. Scientists in the early 19th century had discussed how some plants needed cold temperatures to flower. In 1857 the American agriculturist John Hancock Klippart, Secretary of the Ohio Board of Agriculture, reported the importance and effect of winter temperature on the germination of wheat. One of the most significant works was by the German plant physiologist Gustav Gassner, who gave a detailed discussion in his 1918 paper. Gassner was the first to differentiate systematically the specific requirements of winter plants from those of summer plants, and to note that early swollen germinating seeds of winter cereals are sensitive to cold. In 1928, the Soviet agronomist Trofim Lysenko published his works on the effects of cold on cereal seeds, and coined the term "яровизация" (yarovizatsiya : "jarovization") to describe a chilling process he used to make the seeds of winter cereals behave like spring cereals (from яровой : yarovoy, Tatar root ярый : yaryiy meaning ardent, fiery, associated with the god of spring). Lysenko himself translated the term into "vernalization" (from the Latin vernum, meaning spring). After Lysenko, the term was used to describe the ability of some plants to flower after a period of chilling, brought about by physiological changes and external factors. The formal definition was given in 1960 by the French botanist P. Chouard as "the acquisition or acceleration of the ability to flower by a chilling treatment." Lysenko's 1928 paper on vernalization and plant physiology drew wide attention due to its practical consequences for Russian agriculture. Severe cold and lack of winter snow had destroyed many early winter wheat seedlings. By treating wheat seeds with moisture as well as cold, Lysenko induced them to bear a crop when planted in spring. Later, however, according to Richard Amasino, Lysenko inaccurately asserted that the vernalized state could be inherited, i.e. 
the offspring of a vernalized plant would behave as if they themselves had also been vernalized and would not require vernalization in order to flower quickly. Opposing this view and supporting Lysenko's claim, Xiuju Li and Yongsheng Liu have detailed experimental evidence from the USSR, Hungary, Bulgaria and China that shows the conversion between spring wheat and winter wheat, positing that "it is not unreasonable to postulate epigenetic mechanisms that could plausibly result in the conversion of spring to winter wheat or vice versa." Early research on vernalization focused on plant physiology; the increasing availability of molecular biology has made it possible to unravel its underlying mechanisms. For example, both a lengthening daylight period (longer days) and cold temperatures are required for winter wheat plants to go from the vegetative to the reproductive state; the three interacting genes are called VRN1, VRN2, and FT (VRN3). In Arabidopsis thaliana Arabidopsis thaliana ("thale cress") is a much-studied model for vernalization. Some ecotypes (varieties), called "winter annuals", have delayed flowering without vernalization; others ("summer annuals") do not. The genes that underlie this difference in plant physiology have been intensively studied. The reproductive phase change of A. thaliana occurs by a sequence of two related events: first, the bolting transition (the flower stalk elongates), then the floral transition (the first flower appears). Bolting is a robust predictor of flower formation, and hence a good indicator for vernalization research. In winter annual Arabidopsis, vernalization of the meristem appears to confer competence to respond to floral inductive signals. A vernalized meristem retains competence for as long as 300 days in the absence of an inductive signal. At the molecular level, flowering is repressed by the protein Flowering Locus C (FLC), which binds to and represses genes that promote flowering, thus blocking flowering. Winter annual ecotypes of Arabidopsis have an active copy of the gene FRIGIDA (FRI), which promotes FLC expression and thus the repression of flowering. Prolonged exposure to cold (vernalization) induces expression of VERNALIZATION INSENSITIVE3, which interacts with the VERNALIZATION2 (VRN2) polycomb-like complex to reduce FLC expression through chromatin remodeling. Levels of VRN2 protein increase during long-term cold exposure as a result of inhibition of VRN2 turnover via its N-degron. Histone deacetylation at lysines 9 and 14, followed by methylation at lysines 9 and 27, is associated with the vernalization response. The epigenetic silencing of FLC by chromatin remodeling is also thought to involve the cold-induced expression of antisense FLC COOLAIR or COLDAIR transcripts. Vernalization is registered by the plant through the stable silencing of individual FLC loci. The removal of silent chromatin marks at FLC during embryogenesis prevents the inheritance of the vernalized state. Since vernalization also occurs in flc mutants (lacking FLC), vernalization must also activate a non-FLC pathway. A day-length mechanism is also important: the vernalization response works in concert with the photoperiodic genes CO, FT, PHYA, and CRY2 to induce flowering. Devernalization It is possible to devernalize a plant by exposure to high temperatures subsequent to vernalization. 
For example, commercial onion growers store sets at low temperatures, but devernalize them before planting, because they want the plant's energy to go into enlarging its bulb (underground stem), not making flowers.
Biology and health sciences
Plant reproduction
Biology
152030
https://en.wikipedia.org/wiki/Corvette
Corvette
A corvette is a small warship. It is traditionally the smallest class of vessel considered to be a proper (or "rated") warship. The warship class above the corvette is that of the frigate, while the class below was historically that of the sloop-of-war. The modern roles that a corvette fulfills include coastal patrol craft, missile boat and fast attack craft. These corvettes are typically between 500 and 2,000 tons. Recent designs of corvettes may approach 3,000 tons and include a hangar to accommodate a helicopter, having size and capabilities that overlap with smaller frigates. However unlike contemporary frigates, a modern corvette does not have sufficient endurance or seaworthiness for long voyages. The word "corvette" is first found in Middle French, a diminutive of the Dutch word corf, meaning a "basket", from the Latin corbis. The rank "corvette captain", equivalent in many navies to "lieutenant commander", derives from the name of this type of ship. The rank is the most junior of three "captain" ranks in several European (e.g.; France, Spain, Italy, Germany, Croatia) and South American (e.g., Argentina, Chile, Brazil, Colombia) navies, because a corvette, as the smallest class of rated warship, was traditionally the smallest class of vessel entitled to a commander of a "captain" rank. Sailing vessels During the Age of Sail, corvettes were one of many types of warships smaller than a frigate and with a single deck of guns. They were very closely related to sloops-of-war. The role of the corvette consisted mostly of coastal patrol, fighting minor wars, supporting large fleets, or participating in show-the-flag missions. The English Navy began using small ships in the 1650s, but described them as sloops rather than corvettes. The first reference to a corvette was with the French Navy in the 1670s, which may be where the term originated. The French Navy's corvettes grew over the decades and by the 1780s they were ships of 20 guns or so, approximately equivalent to the British Navy's post ships. The British Navy did not adopt the term until the 1830s, long after the Napoleonic Wars, to describe a small sixth-rate vessel somewhat larger than a sloop. The last vessel lost by France during the American Revolutionary War was the corvette Le Dragon, scuttled by her captain to avoid capture off Monte Cristi, Haïti in January 1783. Most corvettes and sloops of the 17th century were in length and measured 40 to 70 tons burthen. They carried four to eight smaller guns on single decks. Over time, vessels of increasing size and capability were called "corvettes"; by 1800, they reached lengths of over and measured from 400 to 600 tons burthen. Steam ships Ships during the steam era became much faster and more manoeuvrable than their sail ancestors. Corvettes during this era were typically used alongside gunboats during colonial missions. Battleships and other large vessels were unnecessary when fighting the indigenous people of the Far East and Africa. World War II The modern corvette appeared during World War II as an easily-built patrol and convoy escort vessel. The British naval designer William Reed drew up a small ship based on the single-shaft Smiths Dock Company whale catcher , whose simple design and mercantile construction standards lent itself to rapid production in large numbers in small yards unused to naval work. First Lord of the Admiralty Winston Churchill, later Prime Minister, had a hand in reviving the name "corvette". 
During the arms buildup leading to World War II, the term "corvette" was almost attached to the . The Tribals were so much larger than and sufficiently different from other British destroyers that some consideration was given to resurrecting the classification of "corvette" and applying it to them. This idea was dropped, and the term applied to small, mass-produced antisubmarine escorts such as the of World War II. (Royal Navy ships were named after flowers, and ships in Royal Canadian Navy service took the name of smaller Canadian cities and towns.) Their chief duty was to protect convoys throughout the Battle of the Atlantic and on the routes from the UK to Murmansk carrying supplies to the Soviet Union. The Flower-class corvette was originally designed for offshore patrol work, and was not ideal when pressed into service as an antisubmarine escort. It was shorter than ideal for oceangoing convoy escort work, too lightly armed for antiaircraft defense, and the ships were barely faster than the merchantmen they escorted. This was a particular problem given the faster German U-boat designs then emerging. Nonetheless, the ship was quite seaworthy and maneuverable, but living conditions for ocean voyages were challenging. As a result of these shortcomings, the corvette was superseded in the Royal Navy as the escort ship of choice by the frigate, which was larger, faster, better armed, and had two shafts. However, many small yards could not produce vessels of frigate size, so an improved corvette design, the , was introduced later in the war, with some remaining in service until the mid-1950s. The Royal Australian Navy built 60 s, including 20 for the Royal Navy crewed by Australians, and four for the Indian Navy. These were officially described as Australian minesweepers, or as minesweeping sloops by the Royal Navy, and were named after Australian towns. The s or trawlers were referred to as corvettes in the Royal New Zealand Navy, and two, and , rammed and sank a much larger Japanese submarine, , in 1943 in the Solomon Islands. In Italy, the Regia Marina, in dire need of escort vessels for its convoys, designed the , of which 29 were built between 1942 and 1943 (out of 60 planned); they proved apt at operations in the Mediterranean Sea, especially in regards to their anti-air and anti-submarine capability, and were so successful that the class survived after the war into the Marina Militare Italiana until 1972. Modern corvettes Modern navies began a trend in the late 20th and early 21st centuries of building corvettes geared towards smaller more manoeuvrable surface capability. These corvettes have displacements between and measure in length. They are usually armed with medium- and small-calibre guns, surface-to-surface missiles, surface-to-air missiles (SAM), and anti-submarine weapons. Many can accommodate a small or medium anti-submarine warfare helicopter, with the larger ones also having a hangar. While the size and capabilities of the largest corvettes overlap with smaller frigates, corvettes are designed primarily for littoral deployment while frigates are ocean-going vessels by virtue of their greater endurance and seaworthiness. Most countries with coastlines can build corvette-sized ships, either as part of their commercial shipbuilding activities or in purpose-built yards, but the sensors, weapons, and other systems required for a surface combatant are more specialized and are around 60% of the total cost. These components are purchased on the international market. 
Current corvette classes Many countries today operate corvettes. Countries that border smaller seas, such as the Baltic Sea or the Persian Gulf, are more likely to build the smaller and more manoeuvrable corvettes, with Russia operating the most corvettes in the world. In the 1960s, the Portuguese Navy designed the s as multi-role small frigates intended to be affordable for a small navy. The João Coutinho class soon inspired a series of similar projects – including the Spanish , the German MEKO 140, the French A69 and the Portuguese – adopted by a number of medium- and small-sized navies. The first operational corvette based on stealth technology was the Royal Norwegian Navy's . The Swedish Navy introduced the similarly stealthy . Finland has plans to build four multi-role corvettes, currently dubbed the , in the 2020s as part of its navy's Project Squadron 2020. The corvettes will have helicopter carrying, mine laying, ice breaking, anti-aircraft and anti-ship abilities. They will be over long and cost a total of 1.2 billion euros. The new German Navy is designed to replace Germany's fast attack craft and also incorporates stealth technology and land attack capabilities. The Israeli Navy has ordered four of these, named s and a more heavily armed version of the type, deliveries commenced in 2019. The Greek Navy has categorised the class as fast attack missile craft. A similar vessel is the fast attack missile craft of the Turkish Navy, which is classified as a corvette by Lürssen Werft, the German ship designer. The Indian Navy operates four s built by Garden Reach Shipbuilders and Engineers. All of them were in service by 2017. The Israeli Navy operates three s. Built in the U.S. to an Israeli design, they each carry one helicopter and are well-armed with offensive and defensive weapons systems, including the Barak 8 SAM, and advanced electronic sensors and countermeasures. They displace over 1,200 tons at full load. Turkey began to build MİLGEM-class corvettes in 2005. The MİLGEM class is designed for anti-submarine warfare and littoral patrol duty. The lead ship, TCG Heybeliada, entered navy service in 2011. The design concept and mission profile of the MİLGEM class is similar to the of littoral combat ships of the United States. In 2004, to replace the patrol boat, the United Arab Emirates Ministry of Defence awarded a contract to Abu Dhabi Ship Building for the of corvettes. This class is based on the CMN Group's Combattante BR70 design. The Baynunah class is designed for patrol and surveillance, minelaying, interception and other anti-surface warfare operations in the United Arab Emirates territorial waters and exclusive economic zone. The United States is developing littoral combat ships, which are essentially large corvettes, their spacious hulls permitting space for mission modules, allowing them to undertake tasks formerly assigned to specialist classes such as minesweepers or the anti-submarine . Current operators operates three s, four s. operates six s operates two s. operates two modified s, purchased from the United Kingdom, which was upgraded to guided-missile corvettes. operates four s purchased from Italy. operates two and one Imperial Marinheiro-class corvette. operates two s and eleven corvettes. operates a single purchased from South Korea. operates six s. operates four s. operates a single . operates five s as of 2024. operates one , seven , two , four and four s operates 14 s purchased from Germany, three s, three s, four s, and one presidential corvette . 
operates three s. operates two s and a single . navy of the Islamic Revolutionary Guard Corps has 3 Shahid Soleimani class corvettes and 1 Abu Mahdi Al-Muhandis class corvette operates two s. operates four s, two s, and one Amnok-class corvette. operates six Kedah-class corvettes, two s, and four s. operates three s. operates six s. operates three s, and two s. has operates two with two more ships on order, besides one modified , with three more ships on order. operates six s. operates three s purchased from the United Kingdom, two Pohang-class corvettes, and a single . operates a single and a single Kaszub-class corvette. operates one and one . operates four s. operates two s, and two s. operates 20 s, six s, three s, ten Buyan-M-class corvettes, three s, eight s (classed as frigates by NATO), a single (also classed as a frigate by NATO), and two Bora-class corvettes. operates a single . operates two Al Jubail-class corvettes, and four s. operates six s. operates five s and two s. operates three s, one , and one s. operates four s. operates a single . operates a single . operates six s, two s, and a single . operates two s and one Tarantul-class corvette operates single Pauk-class corvette operates three ships operates seven s and one operates 21 ships operates two ships operates 12 ships operates two ships operates single ships purchased from South Korea operates single ship purchased from South Korea operates five ships operates two ships donated from South Korea operates two ships purchased from South Korea operates two ships purchased from South Korea operates two ships purchased from Spain operates single ship operates single ship Jiangdao-class corvette operates four ships ordered from China operates 50 ships operates two ships ordered from China operates four s operates five ships operates two ships operates one ship donated by India operates three ships operates eight ships Former operators decommissioned its last in 1960. returned both its s to the United Kingdom in 1944. decommissioned all its s and s in 1945, following World War II. decommissioned its last in 1967. decommissioned its last in 2009. decommissioned its last in 1979. decommissioned its last Turunmaa-class corvette in 2002. sold all of its 16 s to Indonesia in 1992. decommissioned its two s in 1995. decommissioned its last in 1952. decommissioned its two s in 2022. decommissioned its last in 2019. decommissioned both its s in 2009. decommissioned its lone in 2012. decommissioned its last in 1958. decommissioned both its s in 1948. decommissioned its lone in 1967. decommissioned its last in 1996. last Vinnytsia was sunk in Ochakiv in 2022. decommissioned all its s in 1945 following World War II. decommissioned its lone in 1975. decommissioned its last in 1962. returned its lone to the United Kingdom in 1949. Future development will receive three s from Russia and six Jiangdao-class corvettes from China. will receive three s from the United Arab Emirates. is planning to build 11 more s. is will commission three more Gowind-class corvettes. is currently planning to build four s. is a partner nation in the European Patrol Corvette project. is building an additional five s. is a partner nation in the European Patrol Corvette project. Greece is also planning on receiving a number of Themistocles-class corvettes, a variant of the Israeli Sa'ar 72 class. Greece has also ordered three Gowind 2500-class corvettes from France. has begun research into its NGC (Next-Gen Corvette) project. 
India is also building 16 Anti-Submarine Warfare Shallow Water Craft (ASW-SWC) corvette, and has signed contracts to build a further 6 corvettes under Next Generation Missile Vessels project. has approved the procurement proposal of up to three s from South Korea. Islamic Revolutionary Guard Corps Navy 1 Shahid Soleimani class corvette and 3 Abu Mahdi Al-Muhandis class corvettes are under construction is currently building an additional two s. Israel is also planning a number of new s. is leading the development of the European Patrol Corvette in a joint project with other European Union partners. has ordered four s from Turkey. purchased an additional from South Korea, but is awaiting transfer due to lack of funding. The Philippines have also ordered two new corvettes from Hyundai. is a partner nation in the European Patrol Corvette project. has ordered four Luleå-class vessels. has ordered four Gowind-class corvettes. is currently building corvettes in six separate classes, including: the Karakurt class, Buyan-M-class, Bykov class, Steregushchiy class, Gremyashchiy class and Derzky class (the latter three classed as frigates by NATO). has ordered an unspecified number of s from Turkey. has ordered two Gowind-class corvettes. Museum ships (Replica), 1854, in Iquique, Chile , 1874 steam and sail barque, Buenos Aires, Argentina , 1941 , Williamstown, Victoria, Australia , 1955 , Belém, Para, Brazil , 1941 , Halifax, Nova Scotia, Canada , 1941 , Whyalla, South Australia, Australia , 1968 corvette, Turku, Finland in Diu, India in Samut Prakan Province, Thailand. , a in Pohang, South Korea. , a in Jinhae, South Korea. , a , was sunk by a North Korean submarine on March 26, 2010, and later raised, is on display in Pyeongtaek, South Korea. , a in Kronstadt, Russia. , 1986 in Peenemünde, Germany. Former museum ships , 1984 missile corvette, Fall River, Massachusetts, US - Scrapped in 2023 due to severe hull deterioration. , 1955 , Porto Velho, Brazil - Scrapped in 2023, after partially sinking at her moorings.
Technology
Naval warfare
null
152205
https://en.wikipedia.org/wiki/Forcing%20%28mathematics%29
Forcing (mathematics)
In the mathematical discipline of set theory, forcing is a technique for proving consistency and independence results. Intuitively, forcing can be thought of as a technique to expand the set theoretical universe to a larger universe by introducing a new "generic" object . Forcing was first used by Paul Cohen in 1963, to prove the independence of the axiom of choice and the continuum hypothesis from Zermelo–Fraenkel set theory. It has been considerably reworked and simplified in the following years, and has since served as a powerful technique, both in set theory and in areas of mathematical logic such as recursion theory. Descriptive set theory uses the notions of forcing from both recursion theory and set theory. Forcing has also been used in model theory, but it is common in model theory to define genericity directly without mention of forcing. Intuition Forcing is usually used to construct an expanded universe that satisfies some desired property. For example, the expanded universe might contain many new real numbers (at least of them), identified with subsets of the set of natural numbers, that were not there in the old universe, and thereby violate the continuum hypothesis. In order to intuitively justify such an expansion, it is best to think of the "old universe" as a model of the set theory, which is itself a set in the "real universe" . By the Löwenheim–Skolem theorem, can be chosen to be a "bare bones" model that is externally countable, which guarantees that there will be many subsets (in ) of that are not in . Specifically, there is an ordinal that "plays the role of the cardinal " in , but is actually countable in . Working in , it should be easy to find one distinct subset of per each element of . (For simplicity, this family of subsets can be characterized with a single subset .) However, in some sense, it may be desirable to "construct the expanded model within ". This would help ensure that "resembles" in certain aspects, such as being the same as (more generally, that cardinal collapse does not occur), and allow fine control over the properties of . More precisely, every member of should be given a (non-unique) name in . The name can be thought as an expression in terms of , just like in a simple field extension every element of can be expressed in terms of . A major component of forcing is manipulating those names within , so sometimes it may help to directly think of as "the universe", knowing that the theory of forcing guarantees that will correspond to an actual model. A subtle point of forcing is that, if is taken to be an arbitrary "missing subset" of some set in , then the constructed "within " may not even be a model. This is because may encode "special" information about that is invisible within (e.g. the countability of ), and thus prove the existence of sets that are "too complex for to describe". Forcing avoids such problems by requiring the newly introduced set to be a generic set relative to . Some statements are "forced" to hold for any generic : For example, a generic is "forced" to be infinite. Furthermore, any property (describable in ) of a generic set is "forced" to hold under some forcing condition. The concept of "forcing" can be defined within , and it gives enough reasoning power to prove that is indeed a model that satisfies the desired properties. Cohen's original technique, now called ramified forcing, is slightly different from the unramified forcing expounded here. 
Forcing is also equivalent to the method of Boolean-valued models, which some feel is conceptually more natural and intuitive, but usually much more difficult to apply. The role of the model In order for the above approach to work smoothly, must in fact be a standard transitive model in , so that membership and other elementary notions can be handled intuitively in both and . A standard transitive model can be obtained from any standard model through the Mostowski collapse lemma, but the existence of any standard model of (or any variant thereof) is in itself a stronger assumption than the consistency of . To get around this issue, a standard technique is to let be a standard transitive model of an arbitrary finite subset of (any axiomatization of has at least one axiom schema, and thus an infinite number of axioms), the existence of which is guaranteed by the reflection principle. As the goal of a forcing argument is to prove consistency results, this is enough since any inconsistency in a theory must manifest with a derivation of a finite length, and thus involve only a finite number of axioms. Forcing conditions and forcing posets Each forcing condition can be regarded as a finite piece of information regarding the object adjoined to the model. There are many different ways of providing information about an object, which give rise to different forcing notions. A general approach to formalizing forcing notions is to regard forcing conditions as abstract objects with a poset structure. A forcing poset is an ordered triple, , where is a preorder on , and is the largest element. Members of are the forcing conditions (or just conditions). The order relation means " is stronger than ". (Intuitively, the "smaller" condition provides "more" information, just as the smaller interval provides more information about the number than the interval does.) Furthermore, the preorder must be atomless, meaning that it must satisfy the splitting condition: For each , there are such that , with no such that . In other words, it must be possible to strengthen any forcing condition in at least two incompatible directions. Intuitively, this is because is only a finite piece of information, whereas an infinite piece of information is needed to determine . There are various conventions in use. Some authors require to also be antisymmetric, so that the relation is a partial order. Some use the term partial order anyway, conflicting with standard terminology, while some use the term preorder. The largest element can be dispensed with. The reverse ordering is also used, most notably by Saharon Shelah and his co-authors. Examples Let be any infinite set (such as ), and let the generic object in question be a new subset . In Cohen's original formulation of forcing, each forcing condition is a finite set of sentences, either of the form or , that are self-consistent (i.e. and for the same value of do not appear in the same condition). This forcing notion is usually called Cohen forcing. The forcing poset for Cohen forcing can be formally written as , the finite partial functions from to under reverse inclusion. Cohen forcing satisfies the splitting condition because given any condition , one can always find an element not mentioned in , and add either the sentence or to to get two new forcing conditions, incompatible with each other. Another instructive example of a forcing poset is , where and is the collection of Borel subsets of having non-zero Lebesgue measure. 
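The displayed formulas for the poset axioms and for the two example posets did not survive above; the following is a hedged reconstruction in standard textbook notation (the specific symbols, such as $\mathbb{P}$, $\mathbf{1}$ and $\operatorname{Fin}(S,2)$, are choices of this sketch rather than quotations of the original).

```latex
% Hedged reconstruction of the forcing-poset definitions and the two examples above.
\[
(\mathbb{P},\leq,\mathbf{1}):\quad \leq\ \text{a preorder on }\mathbb{P},\ \ \mathbf{1}\ \text{its largest element};
\qquad
\text{splitting: }\forall p\ \exists q,r\leq p\ \ \neg\exists s\,(s\leq q\wedge s\leq r).
\]
\[
\text{Cohen forcing: }\ \mathbb{P}=\operatorname{Fin}(S,2),\ \text{the finite partial functions }p:S\rightharpoonup\{0,1\},\ \text{ordered by reverse inclusion};
\]
\[
\text{measure forcing: }\ \mathbb{P}=\{\,B\in\operatorname{Bor}([0,1])\ :\ \mu(B)>0\,\},\ \text{ordered by }\subseteq .
\]
```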
The generic object associated with this forcing poset is a random real number . It can be shown that falls in every Borel subset of with measure 1, provided that the Borel subset is "described" in the original unexpanded universe (this can be formalized with the concept of Borel codes). Each forcing condition can be regarded as a random event with probability equal to its measure. Due to the ready intuition this example can provide, probabilistic language is sometimes used with other divergent forcing posets. Generic filters Even though each individual forcing condition cannot fully determine the generic object , the set of all true forcing conditions does determine . In fact, without loss of generality, is commonly considered to be the generic object adjoined to , so the expanded model is called . It is usually easy enough to show that the originally desired object is indeed in the model . Under this convention, the concept of "generic object" can be described in a general way. Specifically, the set should be a generic filter on relative to . The "filter" condition means that it makes sense that is a set of all true forcing conditions: if , then if , then there exists an such that For to be "generic relative to " means: If is a "dense" subset of (that is, for each , there exists a such that ), then . Given that is a countable model, the existence of a generic filter follows from the Rasiowa–Sikorski lemma. In fact, slightly more is true: Given a condition , one can find a generic filter such that . Due to the splitting condition on , if is a filter, then is dense. If , then because is a model of . For this reason, a generic filter is never in . P-names and interpretations Associated with a forcing poset is the class of -names. A -name is a set of the form Given any filter on , the interpretation or valuation map from -names is given by The -names are, in fact, an expansion of the universe. Given , one defines to be the -name Since , it follows that . In a sense, is a "name for " that does not depend on the specific choice of . This also allows defining a "name for " without explicitly referring to : so that . Rigorous definitions The concepts of -names, interpretations, and may be defined by transfinite recursion. With the empty set, the successor ordinal to ordinal , the power-set operator, and a limit ordinal, define the following hierarchy: Then the class of -names is defined as The interpretation map and the map can similarly be defined with a hierarchical construction. Forcing Given a generic filter , one proceeds as follows. The subclass of -names in is denoted . Let To reduce the study of the set theory of to that of , one works with the "forcing language", which is built up like ordinary first-order logic, with membership as the binary relation and all the -names as constants. Define (to be read as " forces in the model with poset "), where is a condition, is a formula in the forcing language, and the 's are -names, to mean that if is a generic filter containing , then . The special case is often written as "" or simply "". Such statements are true in , no matter what is. What is important is that this external definition of the forcing relation is equivalent to an internal definition within , defined by transfinite induction (specifically -induction) over the -names on instances of and , and then by ordinary induction over the complexity of formulae. This has the effect that all the properties of are really properties of , and the verification of in becomes straightforward. 
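Since the displayed definitions in this passage were lost, here is a hedged reconstruction of the name hierarchy, the valuation map, the canonical names, and the external definition of the forcing relation, in standard textbook notation (these are the usual formulations and may differ cosmetically from the original article's own displays).

```latex
% Hedged reconstruction in standard notation.
\[
\operatorname{Name}(\varnothing)=\varnothing,\quad
\operatorname{Name}(\alpha+1)=\mathcal{P}\!\left(\operatorname{Name}(\alpha)\times\mathbb{P}\right),\quad
\operatorname{Name}(\lambda)=\bigcup_{\alpha<\lambda}\operatorname{Name}(\alpha)\ \ (\lambda\ \text{limit}),
\]
\[
V^{(\mathbb{P})}=\bigcup_{\alpha}\operatorname{Name}(\alpha),\qquad
\operatorname{val}(u,G)=\{\,\operatorname{val}(v,G)\mid (v,p)\in u,\ p\in G\,\},
\]
\[
\check{x}=\{\,(\check{y},\mathbf{1})\mid y\in x\,\},\qquad
\dot{G}=\{\,(\check{p},p)\mid p\in\mathbb{P}\,\},\qquad
\operatorname{val}(\check{x},G)=x,\quad\operatorname{val}(\dot{G},G)=G,
\]
\[
p\Vdash_{M,\mathbb{P}}\varphi(u_1,\dots,u_n)\iff
\text{for every generic }G\ni p,\ \ M[G]\models\varphi(\operatorname{val}(u_1,G),\dots,\operatorname{val}(u_n,G)).
\]
```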
This is usually summarized as the following three key properties: Truth: if and only if it is forced by , that is, for some condition , we have . Definability: The statement "" is definable in . Coherence: . Internal definition There are many different but equivalent ways to define the forcing relation in . One way to simplify the definition is to first define a modified forcing relation that is strictly stronger than . The modified relation still satisfies the three key properties of forcing, but and are not necessarily equivalent even if the first-order formulae and are equivalent. The unmodified forcing relation can then be defined as In fact, Cohen's original concept of forcing is essentially rather than . The modified forcing relation can be defined recursively as follows: means means means means means Other symbols of the forcing language can be defined in terms of these symbols: For example, means , means , etc. Cases 1 and 2 depend on each other and on case 3, but the recursion always refers to -names with lesser ranks, so transfinite induction allows the definition to go through. By construction, (and thus ) automatically satisfies Definability. The proof that also satisfies Truth and Coherence is by inductively inspecting each of the five cases above. Cases 4 and 5 are trivial (thanks to the choice of and as the elementary symbols), cases 1 and 2 rely only on the assumption that is a filter, and only case 3 requires to be a generic filter. Formally, an internal definition of the forcing relation (such as the one presented above) is actually a transformation of an arbitrary formula to another formula where and are additional variables. The model does not explicitly appear in the transformation (note that within , just means " is a -name"), and indeed one may take this transformation as a "syntactic" definition of the forcing relation in the universe of all sets regardless of any countable transitive model. However, if one wants to force over some countable transitive model , then the latter formula should be interpreted under (i.e. with all quantifiers ranging only over ), in which case it is equivalent to the external "semantic" definition of described at the top of this section: For any formula there is a theorem of the theory (for example, a conjunction of a finite number of axioms) such that for any countable transitive model such that and any atomless partial order and any -generic filter over This is the sense in which the forcing relation is indeed "definable in ". Consistency The discussion above can be summarized by the fundamental consistency result that, given a forcing poset , we may assume the existence of a generic filter , not belonging to the universe , such that is again a set-theoretic universe that models . Furthermore, all truths in may be reduced to truths in involving the forcing relation. Both styles, adjoining to either a countable transitive model or the whole universe , are commonly used. Less commonly seen is the approach using the "internal" definition of forcing, in which no mention of set or class models is made. This was Cohen's original method, and in one elaboration, it becomes the method of Boolean-valued analysis. Cohen forcing The simplest nontrivial forcing poset is , the finite partial functions from to under reverse inclusion. That is, a condition is essentially two disjoint finite subsets and of , to be thought of as the "yes" and "no" parts of with no information provided on values outside the domain of . 
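To make the "yes"/"no" description of Cohen conditions concrete, here is a minimal Python sketch (all function names are invented for this illustration and are not part of any library or of the original text), modelling conditions as finite partial functions and checking the splitting condition.

```python
# A minimal sketch (names invented for illustration): Cohen forcing conditions
# as finite partial functions from the natural numbers to {0, 1}.  The "yes"
# part of a condition p is {n : p[n] == 1}, the "no" part is {n : p[n] == 0}.

def extends(q: dict, p: dict) -> bool:
    """q <= p ("q is stronger than p") iff q agrees with p wherever p is defined."""
    return all(n in q and q[n] == v for n, v in p.items())

def compatible(p: dict, q: dict) -> bool:
    """p and q are compatible iff they agree on the overlap of their domains."""
    return all(q[n] == v for n, v in p.items() if n in q)

def split(p: dict) -> tuple:
    """Splitting condition: strengthen p in two incompatible ways by deciding
    a natural number not yet mentioned by p."""
    n = 0
    while n in p:
        n += 1
    return {**p, n: 0}, {**p, n: 1}

p = {0: 1, 3: 0}              # "0 is in the new set, 3 is not"
q, r = split(p)
assert extends(q, p) and extends(r, p) and not compatible(q, r)
```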
" is stronger than " means that , in other words, the "yes" and "no" parts of are supersets of the "yes" and "no" parts of , and in that sense, provide more information. Let be a generic filter for this poset. If and are both in , then is a condition because is a filter. This means that is a well-defined partial function from to because any two conditions in agree on their common domain. In fact, is a total function. Given , let . Then is dense. (Given any , if is not in 's domain, adjoin a value for —the result is in .) A condition has in its domain, and since , we find that is defined. Let , the set of all "yes" members of the generic conditions. It is possible to give a name for directly. Let Then Now suppose that in . We claim that . Let Then is dense. (Given any , find that is not in its domain, and adjoin a value for contrary to the status of "".) Then any witnesses . To summarize, is a "new" subset of , necessarily infinite. Replacing with , that is, consider instead finite partial functions whose inputs are of the form , with and , and whose outputs are or , one gets new subsets of . They are all distinct, by a density argument: Given , let then each is dense, and a generic condition in it proves that the αth new set disagrees somewhere with the th new set. This is not yet the falsification of the continuum hypothesis. One must prove that no new maps have been introduced which map onto , or onto . For example, if one considers instead , finite partial functions from to , the first uncountable ordinal, one gets in a bijection from to . In other words, has collapsed, and in the forcing extension, is a countable ordinal. The last step in showing the independence of the continuum hypothesis, then, is to show that Cohen forcing does not collapse cardinals. For this, a sufficient combinatorial property is that all of the antichains of the forcing poset are countable. The countable chain condition An (strong) antichain of is a subset such that if and , then and are incompatible (written ), meaning there is no in such that and . In the example on Borel sets, incompatibility means that has zero measure. In the example on finite partial functions, incompatibility means that is not a function, in other words, and assign different values to some domain input. satisfies the countable chain condition (c.c.c.) if and only if every antichain in is countable. (The name, which is obviously inappropriate, is a holdover from older terminology. Some mathematicians write "c.a.c." for "countable antichain condition".) It is easy to see that satisfies the c.c.c. because the measures add up to at most . Also, satisfies the c.c.c., but the proof is more difficult. Given an uncountable subfamily , shrink to an uncountable subfamily of sets of size , for some . If for uncountably many , shrink this to an uncountable subfamily and repeat, getting a finite set and an uncountable family of incompatible conditions of size such that every is in for at most countable many . Now, pick an arbitrary , and pick from any that is not one of the countably many members that have a domain member in common with . Then and are compatible, so is not an antichain. In other words, -antichains are countable. The importance of antichains in forcing is that for most purposes, dense sets and maximal antichains are equivalent. A maximal antichain is one that cannot be extended to a larger antichain. This means that every element is compatible with some member of . The existence of a maximal antichain follows from Zorn's Lemma. 
Given a maximal antichain , let Then is dense, and if and only if . Conversely, given a dense set , Zorn's Lemma shows that there exists a maximal antichain , and then if and only if . Assume that satisfies the c.c.c. Given , with a function in , one can approximate inside as follows. Let be a name for (by the definition of ) and let be a condition that forces to be a function from to . Define a function , by By the definability of forcing, this definition makes sense within . By the coherence of forcing, a different comes from an incompatible . By c.c.c., is countable. In summary, is unknown in as it depends on , but it is not wildly unknown for a c.c.c.-forcing. One can identify a countable set of guesses for what the value of is at any input, independent of . This has the following very important consequence. If in , is a surjection from one infinite ordinal onto another, then there is a surjection in , and consequently, a surjection in . In particular, cardinals cannot collapse. The conclusion is that in . Easton forcing The exact value of the continuum in the above Cohen model, and variants like for cardinals in general, was worked out by Robert M. Solovay, who also worked out how to violate (the generalized continuum hypothesis), for regular cardinals only, a finite number of times. For example, in the above Cohen model, if holds in , then holds in . William B. Easton worked out the proper class version of violating the for regular cardinals, basically showing that the known restrictions, (monotonicity, Cantor's Theorem and König's Theorem), were the only -provable restrictions (see Easton's Theorem). Easton's work was notable in that it involved forcing with a proper class of conditions. In general, the method of forcing with a proper class of conditions fails to give a model of . For example, forcing with , where is the proper class of all ordinals, makes the continuum a proper class. On the other hand, forcing with introduces a countable enumeration of the ordinals. In both cases, the resulting is visibly not a model of . At one time, it was thought that more sophisticated forcing would also allow an arbitrary variation in the powers of singular cardinals. However, this has turned out to be a difficult, subtle and even surprising problem, with several more restrictions provable in and with the forcing models depending on the consistency of various large-cardinal properties. Many open problems remain. Random reals Random forcing can be defined as forcing over the set of all compact subsets of of positive measure, ordered by the relation (a smaller set, in the sense of inclusion, is a smaller set in the ordering and represents a condition with more information). There are two types of important dense sets: For any positive integer the set is dense, where is the diameter of the set . For any Borel subset of measure 1, the set is dense. For any filter and for any finitely many elements there is such that holds . In the case of this ordering, this means that any filter is a set of compact sets with the finite intersection property. For this reason, the intersection of all elements of any filter is nonempty. If is a filter intersecting the dense set for any positive integer , then the filter contains conditions of arbitrarily small positive diameter. Therefore, the intersection of all conditions from has diameter 0. But the only nonempty sets of diameter 0 are singletons. So there is exactly one real number such that . Let be any Borel set of measure 1. If intersects , then . 
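In the same spirit, the two families of dense sets for random forcing just described can be reconstructed as follows (a hedged, standard rendering; the letters $D_n$ and $E_B$ are choices of this sketch).

```latex
% Hedged reconstruction: conditions are compact C of positive Lebesgue measure, ordered by inclusion.
\[
D_n=\{\,C\ :\ \operatorname{diam}(C)<\tfrac1n\,\}\quad(n\in\mathbb{N}^{+}),
\qquad
E_B=\{\,C\ :\ C\subseteq B\,\}\quad(B\ \text{Borel},\ \mu(B)=1).
\]
```

A filter meeting every $D_n$ contains conditions of arbitrarily small diameter, so the conditions single out one real $r$, and meeting $E_B$ forces $r\in B$.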
However, a generic filter over a countable transitive model is not in . The real defined by is provably not an element of . The problem is that if , then " is compact", but from the viewpoint of some larger universe , can be non-compact and the intersection of all conditions from the generic filter is actually empty. For this reason, we consider the set of topological closures of conditions from G (i.e., ). Because of and the finite intersection property of , the set also has the finite intersection property. Elements of the set are bounded closed sets as closures of bounded sets. Therefore, is a set of compact sets with the finite intersection property and thus has nonempty intersection. Since and the ground model inherits a metric from the universe , the set has elements of arbitrarily small diameter. Finally, there is exactly one real that belongs to all members of the set . The generic filter can be reconstructed from as . If is name of , and for holds " is Borel set of measure 1", then holds for some . There is name such that for any generic filter holds Then holds for any condition . Every Borel set can, non-uniquely, be built up, starting from intervals with rational endpoints and applying the operations of complement and countable unions, a countable number of times. The record of such a construction is called a Borel code. Given a Borel set in , one recovers a Borel code, and then applies the same construction sequence in , getting a Borel set . It can be proven that one gets the same set independent of the construction of , and that basic properties are preserved. For example, if , then . If has measure zero, then has measure zero. This mapping is injective. For any set such that and " is a Borel set of measure 1" holds . This means that is "infinite random sequence of 0s and 1s" from the viewpoint of , which means that it satisfies all statistical tests from the ground model . So given , a random real, one can show that Because of the mutual inter-definability between and , one generally writes for . A different interpretation of reals in was provided by Dana Scott. Rational numbers in have names that correspond to countably-many distinct rational values assigned to a maximal antichain of Borel sets – in other words, a certain rational-valued function on . Real numbers in then correspond to Dedekind cuts of such functions, that is, measurable functions. Boolean-valued models Perhaps more clearly, the method can be explained in terms of Boolean-valued models. In these, any statement is assigned a truth value from some complete atomless Boolean algebra, rather than just a true/false value. Then an ultrafilter is picked in this Boolean algebra, which assigns values true/false to statements of our theory. The point is that the resulting theory has a model that contains this ultrafilter, which can be understood as a new model obtained by extending the old one with this ultrafilter. By picking a Boolean-valued model in an appropriate way, we can get a model that has the desired property. In it, only statements that must be true (are "forced" to be true) will be true, in a sense (since it has this extension/minimality property). Meta-mathematical explanation In forcing, we usually seek to show that some sentence is consistent with (or optionally some extension of ). One way to interpret the argument is to assume that is consistent and then prove that combined with the new sentence is also consistent. 
Each "condition" is a finite piece of information – the idea is that only finite pieces are relevant for consistency, since, by the compactness theorem, a theory is satisfiable if and only if every finite subset of its axioms is satisfiable. Then we can pick an infinite set of consistent conditions to extend our model. Therefore, assuming the consistency of , we prove the consistency of extended by this infinite set. Logical explanation By Gödel's second incompleteness theorem, one cannot prove the consistency of any sufficiently strong formal theory, such as , using only the axioms of the theory itself, unless the theory is inconsistent. Consequently, mathematicians do not attempt to prove the consistency of using only the axioms of , or to prove that is consistent for any hypothesis using only . For this reason, the aim of a consistency proof is to prove the consistency of relative to the consistency of . Such problems are known as problems of relative consistency, one of which proves The general schema of relative consistency proofs follows. As any proof is finite, it uses only a finite number of axioms: For any given proof, can verify the validity of this proof. This is provable by induction on the length of the proof. Then resolve By proving the following it can be concluded that which is equivalent to which gives (*). The core of the relative consistency proof is proving (**). A proof of can be constructed for any given finite subset of the axioms (by instruments of course). (No universal proof of of course.) In , it is provable that for any condition , the set of formulas (evaluated by names) forced by is deductively closed. Furthermore, for any axiom, proves that this axiom is forced by . Then it suffices to prove that there is at least one condition that forces . In the case of Boolean-valued forcing, the procedure is similar: proving that the Boolean value of is not . Another approach uses the Reflection Theorem. For any given finite set of axioms, there is a proof that this set of axioms has a countable transitive model. For any given finite set of axioms, there is a finite set of axioms such that proves that if a countable transitive model satisfies , then satisfies . By proving that there is finite set of axioms such that if a countable transitive model satisfies , then satisfies the hypothesis . Then, for any given finite set of axioms, proves . Sometimes in (**), a stronger theory than is used for proving . Then we have proof of the consistency of relative to the consistency of . Note that , where is (the axiom of constructibility).
Mathematics
Set theory
null
152207
https://en.wikipedia.org/wiki/Compactness%20theorem
Compactness theorem
In mathematical logic, the compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. This theorem is an important tool in model theory, as it provides a useful (but generally not effective) method for constructing models of any set of sentences that is finitely consistent. The compactness theorem for the propositional calculus is a consequence of Tychonoff's theorem (which says that the product of compact spaces is compact) applied to compact Stone spaces, hence the theorem's name. Likewise, it is analogous to the finite intersection property characterization of compactness in topological spaces: a collection of closed sets in a compact space has a non-empty intersection if every finite subcollection has a non-empty intersection. The compactness theorem is one of the two key properties, along with the downward Löwenheim–Skolem theorem, that is used in Lindström's theorem to characterize first-order logic. Although there are some generalizations of the compactness theorem to non-first-order logics, the compactness theorem itself does not hold in them, except for a very limited number of examples. History Kurt Gödel proved the countable compactness theorem in 1930. Anatoly Maltsev proved the uncountable case in 1936. Applications The compactness theorem has many applications in model theory; a few typical results are sketched here. Robinson's principle The compactness theorem implies the following result, stated by Abraham Robinson in his 1949 dissertation. Robinson's principle: If a first-order sentence holds in every field of characteristic zero, then there exists a constant such that the sentence holds for every field of characteristic larger than This can be seen as follows: suppose is a sentence that holds in every field of characteristic zero. Then its negation together with the field axioms and the infinite sequence of sentences is not satisfiable (because there is no field of characteristic 0 in which holds, and the infinite sequence of sentences ensures any model would be a field of characteristic 0). Therefore, there is a finite subset of these sentences that is not satisfiable. must contain because otherwise it would be satisfiable. Because adding more sentences to does not change unsatisfiability, we can assume that contains the field axioms and, for some the first sentences of the form Let contain all the sentences of except Then any field with a characteristic greater than is a model of and together with is not satisfiable. This means that must hold in every model of which means precisely that holds in every field of characteristic greater than This completes the proof. The Lefschetz principle, one of the first examples of a transfer principle, extends this result. A first-order sentence in the language of rings is true in (or equivalently, in ) algebraically closed field of characteristic 0 (such as the complex numbers for instance) if and only if there exist infinitely many primes for which is true in algebraically closed field of characteristic in which case is true in algebraically closed fields of sufficiently large non-0 characteristic One consequence is the following special case of the Ax–Grothendieck theorem: all injective complex polynomials are surjective (indeed, it can even be shown that its inverse will also be a polynomial). In fact, the surjectivity conclusion remains true for any injective polynomial where is a finite field or the algebraic closure of such a field. 
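The "infinite sequence of sentences" in the argument for Robinson's principle can be written out; the following is a hedged reconstruction in the language of rings (the label $\psi_p$ is a choice of this sketch).

```latex
\[
\psi_p:\qquad \underbrace{1+1+\cdots+1}_{p\ \text{times}}\neq 0 \qquad (p\ \text{prime}).
\]
```

A field has characteristic zero exactly when it satisfies every $\psi_p$, and the principle follows by applying compactness to $\{\neg\varphi\}\cup\{\text{field axioms}\}\cup\{\psi_p : p\ \text{prime}\}$: if this set had no unsatisfiable finite subset, it would have a model, namely a field of characteristic zero satisfying $\neg\varphi$.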
Upward Löwenheim–Skolem theorem A second application of the compactness theorem shows that any theory that has arbitrarily large finite models, or a single infinite model, has models of arbitrary large cardinality (this is the Upward Löwenheim–Skolem theorem). So for instance, there are nonstandard models of Peano arithmetic with uncountably many 'natural numbers'. To achieve this, let be the initial theory and let be any cardinal number. Add to the language of one constant symbol for every element of Then add to a collection of sentences that say that the objects denoted by any two distinct constant symbols from the new collection are distinct (this is a collection of sentences). Since every subset of this new theory is satisfiable by a sufficiently large finite model of or by any infinite model, the entire extended theory is satisfiable. But any model of the extended theory has cardinality at least . Non-standard analysis A third application of the compactness theorem is the construction of nonstandard models of the real numbers, that is, consistent extensions of the theory of the real numbers that contain "infinitesimal" numbers. To see this, let be a first-order axiomatization of the theory of the real numbers. Consider the theory obtained by adding a new constant symbol to the language and adjoining to the axiom and the axioms for all positive integers Clearly, the standard real numbers are a model for every finite subset of these axioms, because the real numbers satisfy everything in and, by suitable choice of can be made to satisfy any finite subset of the axioms about By the compactness theorem, there is a model that satisfies and also contains an infinitesimal element A similar argument, this time adjoining the axioms etc., shows that the existence of numbers with infinitely large magnitudes cannot be ruled out by any axiomatization of the reals. It can be shown that the hyperreal numbers satisfy the transfer principle: a first-order sentence is true of if and only if it is true of Proofs One can prove the compactness theorem using Gödel's completeness theorem, which establishes that a set of sentences is satisfiable if and only if no contradiction can be proven from it. Since proofs are always finite and therefore involve only finitely many of the given sentences, the compactness theorem follows. In fact, the compactness theorem is equivalent to Gödel's completeness theorem, and both are equivalent to the Boolean prime ideal theorem, a weak form of the axiom of choice. Gödel originally proved the compactness theorem in just this way, but later some "purely semantic" proofs of the compactness theorem were found; that is, proofs that refer to but not to . One of those proofs relies on ultraproducts hinging on the axiom of choice as follows: Proof: Fix a first-order language and let be a collection of -sentences such that every finite subcollection of -sentences, of it has a model Also let be the direct product of the structures and be the collection of finite subsets of For each let The family of all of these sets generates a proper filter, so there is an ultrafilter containing all sets of the form Now for any sentence in the set is in whenever then hence holds in the set of all with the property that holds in is a superset of hence also in Łoś's theorem now implies that holds in the ultraproduct So this ultraproduct satisfies all formulas in
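The ultraproduct argument just given can be stated symbolically; the following is a hedged sketch in standard notation, where $\Sigma$ is the collection of sentences, $I$ the set of its finite subsets, $\mathcal{M}_i$ a model of the finite subset $i$, and $U$ the ultrafilter from the proof.

```latex
\[
A=\prod_{i\in I}\mathcal{M}_i\,\big/\,U,\qquad
A\models\varphi\iff\{\,i\in I:\mathcal{M}_i\models\varphi\,\}\in U\quad(\text{Łoś's theorem, for sentences}).
\]
```

For each $\varphi\in\Sigma$, the set $\{\,i : \mathcal{M}_i\models\varphi\,\}$ contains $\{\,i : \varphi\in i\,\}$, which lies in $U$, so $A\models\varphi$; hence $A\models\Sigma$.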
Mathematics
Model theory
null
152214
https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel%20set%20theory
Zermelo–Fraenkel set theory
In set theory, Zermelo–Fraenkel set theory, named after mathematicians Ernst Zermelo and Abraham Fraenkel, is an axiomatic system that was proposed in the early twentieth century in order to formulate a theory of sets free of paradoxes such as Russell's paradox. Today, Zermelo–Fraenkel set theory, with the historically controversial axiom of choice (AC) included, is the standard form of axiomatic set theory and as such is the most common foundation of mathematics. Zermelo–Fraenkel set theory with the axiom of choice included is abbreviated ZFC, where C stands for "choice", and ZF refers to the axioms of Zermelo–Fraenkel set theory with the axiom of choice excluded. Informally, Zermelo–Fraenkel set theory is intended to formalize a single primitive notion, that of a hereditary well-founded set, so that all entities in the universe of discourse are such sets. Thus the axioms of Zermelo–Fraenkel set theory refer only to pure sets and prevent its models from containing urelements (elements that are not themselves sets). Furthermore, proper classes (collections of mathematical objects defined by a property shared by their members where the collections are too big to be sets) can only be treated indirectly. Specifically, Zermelo–Fraenkel set theory does not allow for the existence of a universal set (a set containing all sets) nor for unrestricted comprehension, thereby avoiding Russell's paradox. Von Neumann–Bernays–Gödel set theory (NBG) is a commonly used conservative extension of Zermelo–Fraenkel set theory that does allow explicit treatment of proper classes. There are many equivalent formulations of the axioms of Zermelo–Fraenkel set theory. Most of the axioms state the existence of particular sets defined from other sets. For example, the axiom of pairing says that given any two sets and there is a new set containing exactly and . Other axioms describe properties of set membership. A goal of the axioms is that each axiom should be true if interpreted as a statement about the collection of all sets in the von Neumann universe (also known as the cumulative hierarchy). The metamathematics of Zermelo–Fraenkel set theory has been extensively studied. Landmark results in this area established the logical independence of the axiom of choice from the remaining Zermelo-Fraenkel axioms and of the continuum hypothesis from ZFC. The consistency of a theory such as ZFC cannot be proved within the theory itself, as shown by Gödel's second incompleteness theorem. History The modern study of set theory was initiated by Georg Cantor and Richard Dedekind in the 1870s. However, the discovery of paradoxes in naive set theory, such as Russell's paradox, led to the desire for a more rigorous form of set theory that was free of these paradoxes. In 1908, Ernst Zermelo proposed the first axiomatic set theory, Zermelo set theory. However, as first pointed out by Abraham Fraenkel in a 1921 letter to Zermelo, this theory was incapable of proving the existence of certain sets and cardinal numbers whose existence was taken for granted by most set theorists of the time, notably the cardinal number aleph-omega () and the set where is any infinite set and is the power set operation. Moreover, one of Zermelo's axioms invoked a concept, that of a "definite" property, whose operational meaning was not clear. 
In 1922, Fraenkel and Thoralf Skolem independently proposed operationalizing a "definite" property as one that could be formulated as a well-formed formula in a first-order logic whose atomic formulas were limited to set membership and identity. They also independently proposed replacing the axiom schema of specification with the axiom schema of replacement. Appending this schema, as well as the axiom of regularity (first proposed by John von Neumann), to Zermelo set theory yields the theory denoted by ZF. Adding to ZF either the axiom of choice (AC) or a statement that is equivalent to it yields ZFC. Formal language Formally, ZFC is a one-sorted theory in first-order logic. The equality symbol can be treated as either a primitive logical symbol or a high-level abbreviation for having exactly the same elements. The former approach is the most common. The signature has a single predicate symbol, usually denoted , which is a predicate symbol of arity 2 (a binary relation symbol). This symbol symbolizes a set membership relation. For example, the formula means that is an element of the set (also read as is a member of ). There are different ways to formulate the formal language. Some authors may choose a different set of connectives or quantifiers. For example, the logical connective NAND alone can encode the other connectives, a property known as functional completeness. This section attempts to strike a balance between simplicity and intuitiveness. The language's alphabet consists of: A countably infinite amount of variables used for representing sets The logical connectives , , The quantifier symbols , The equality symbol The set membership symbol Brackets ( ) With this alphabet, the recursive rules for forming well-formed formulae (wff) are as follows: Let and be metavariables for any variables. These are the two ways to build atomic formulae (the simplest wffs): Let and be metavariables for any wff, and be a metavariable for any variable. These are valid wff constructions: A well-formed formula can be thought as a syntax tree. The leaf nodes are always atomic formulae. Nodes and have exactly two child nodes, while nodes , and have exactly one. There are countably infinitely many wffs, however, each wff has a finite number of nodes. Axioms There are many equivalent formulations of the ZFC axioms. The following particular axiom set is from . The axioms in order below are expressed in a mixture of first order logic and high-level abbreviations. Axioms 1–8 form ZF, while the axiom 9 turns ZF into ZFC. Following , we use the equivalent well-ordering theorem in place of the axiom of choice for axiom 9. All formulations of ZFC imply that at least one set exists. Kunen includes an axiom that directly asserts the existence of a set, although he notes that he does so only "for emphasis". Its omission here can be justified in two ways. First, in the standard semantics of first-order logic in which ZFC is typically formalized, the domain of discourse must be nonempty. Hence, it is a logical theorem of first-order logic that something exists — usually expressed as the assertion that something is identical to itself, . Consequently, it is a theorem of every first-order theory that something exists. However, as noted above, because in the intended semantics of ZFC, there are only sets, the interpretation of this logical theorem in the context of ZFC is that some set exists. Hence, there is no need for a separate axiom asserting that a set exists. 
Second, however, even if ZFC is formulated in so-called free logic, in which it is not provable from logic alone that something exists, the axiom of infinity asserts that an infinite set exists. This implies that a set exists, and so, once again, it is superfluous to include an axiom asserting as much. Axiom of extensionality Two sets are equal (are the same set) if they have the same elements. The converse of this axiom follows from the substitution property of equality. ZFC is constructed in first-order logic. Some formulations of first-order logic include identity; others do not. If the variety of first-order logic in which you are constructing set theory does not include equality "", may be defined as an abbreviation for the following formula: In this case, the axiom of extensionality can be reformulated as which says that if and have the same elements, then they belong to the same sets. Axiom of regularity (also called the axiom of foundation) Every non-empty set contains a member such that and are disjoint sets. or in modern notation: This (along with the axioms of pairing and union) implies, for example, that no set is an element of itself and that every set has an ordinal rank. Axiom schema of specification (or of separation, or of restricted comprehension) Subsets are commonly constructed using set builder notation. For example, the even integers can be constructed as the subset of the integers satisfying the congruence modulo predicate : In general, the subset of a set obeying a formula with one free variable may be written as: The axiom schema of specification states that this subset always exists (it is an axiom schema because there is one axiom for each ). Formally, let be any formula in the language of ZFC with all free variables among ( is not free in ). Then: Note that the axiom schema of specification can only construct subsets and does not allow the construction of entities of the more general form: This restriction is necessary to avoid Russell's paradox (let then ) and its variants that accompany naive set theory with unrestricted comprehension (since under this restriction only refers to sets within that don't belong to themselves, and has not been established, even though is the case, so stands in a separate position from which it can't refer to or comprehend itself; therefore, in a certain sense, this axiom schema is saying that in order to build a on the basis of a formula , we need to previously restrict the sets will regard within a set that leaves outside so can't refer to itself; or, in other words, sets shouldn't refer to themselves). In some other axiomatizations of ZF, this axiom is redundant in that it follows from the axiom schema of replacement and the axiom of the empty set. On the other hand, the axiom schema of specification can be used to prove the existence of the empty set, denoted , once at least one set is known to exist. One way to do this is to use a property which no set has. For example, if is any existing set, the empty set can be constructed as Thus, the axiom of the empty set is implied by the nine axioms presented here. The axiom of extensionality implies the empty set is unique (does not depend on ). It is common to make a definitional extension that adds the symbol "" to the language of ZFC. 
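The displayed first-order forms of the first few axioms did not survive above; the following is a hedged reconstruction in a standard formulation (essentially Kunen's), which may differ in inessential details from the original displays.

```latex
% Hedged reconstruction of the first three axioms in standard notation.
\[
\textbf{Extensionality:}\quad \forall x\,\forall y\,\bigl[\forall z\,(z\in x\leftrightarrow z\in y)\rightarrow x=y\bigr]
\]
\[
\textbf{Regularity:}\quad \forall x\,\bigl[\exists a\,(a\in x)\rightarrow\exists y\,\bigl(y\in x\wedge\neg\exists z\,(z\in y\wedge z\in x)\bigr)\bigr]
\]
\[
\textbf{Specification (schema):}\quad \forall z\,\forall w_1\dots\forall w_n\,\exists y\,\forall x\,\bigl[x\in y\leftrightarrow\bigl(x\in z\wedge\varphi(x,w_1,\dots,w_n,z)\bigr)\bigr]
\]
```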
Axiom of pairing If and are sets, then there exists a set which contains and as elements, for example if x = {1,2} and y = {2,3} then z will be {{1,2},{2,3}} The axiom schema of specification must be used to reduce this to a set with exactly these two elements. The axiom of pairing is part of Z, but is redundant in ZF because it follows from the axiom schema of replacement if we are given a set with at least two elements. The existence of a set with at least two elements is assured by either the axiom of infinity, or by the and the axiom of the power set applied twice to any set. Axiom of union The union over the elements of a set exists. For example, the union over the elements of the set is The axiom of union states that for any set of sets , there is a set containing every element that is a member of some member of : Although this formula doesn't directly assert the existence of , the set can be constructed from in the above using the axiom schema of specification: Axiom schema of replacement The axiom schema of replacement asserts that the image of a set under any definable function will also fall inside a set. Formally, let be any formula in the language of ZFC whose free variables are among so that in particular is not free in . Then: (The unique existential quantifier denotes the existence of exactly one element such that it follows a given statement.) In other words, if the relation represents a definable function , represents its domain, and is a set for every then the range of is a subset of some set . The form stated here, in which may be larger than strictly necessary, is sometimes called the axiom schema of collection. Axiom of infinity Let abbreviate where is some set. (We can see that is a valid set by applying the axiom of pairing with so that the set is ). Then there exists a set such that the empty set , defined axiomatically, is a member of and, whenever a set is a member of then is also a member of . or in modern notation: More colloquially, there exists a set having infinitely many members. (It must be established, however, that these members are all different because if two elements are the same, the sequence will loop around in a finite cycle of sets. The axiom of regularity prevents this from happening.) The minimal set satisfying the axiom of infinity is the von Neumann ordinal which can also be thought of as the set of natural numbers Axiom of power set By definition, a set is a subset of a set if and only if every element of is also an element of : The Axiom of power set states that for any set , there is a set that contains every subset of : The axiom schema of specification is then used to define the power set as the subset of such a containing the subsets of exactly: Axioms 1–8 define ZF. Alternative forms of these axioms are often encountered, some of which are listed in . Some ZF axiomatizations include an axiom asserting that the empty set exists. The axioms of pairing, union, replacement, and power set are often stated so that the members of the set whose existence is being asserted are just those sets which the axiom asserts must contain. The following axiom is added to turn ZF into ZFC: Axiom of well-ordering (choice) The last axiom, commonly known as the axiom of choice, is presented here as a property about well-orders, as in . For any set , there exists a binary relation which well-orders . This means is a linear order on such that every nonempty subset of has a least element under the order . 
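Likewise, standard first-order forms of several of the axioms just described are reconstructed below (hedged; these are the common "at least contains" formulations, from which the exact sets are carved out by specification as the text explains, and the well-ordering axiom is stated schematically).

```latex
% Hedged reconstruction of further axioms in standard notation.
\[
\textbf{Pairing:}\quad \forall x\,\forall y\,\exists z\,(x\in z\wedge y\in z)
\]
\[
\textbf{Union:}\quad \forall\mathcal{F}\,\exists A\,\forall Y\,\forall x\,\bigl[(x\in Y\wedge Y\in\mathcal{F})\rightarrow x\in A\bigr]
\]
\[
\textbf{Infinity:}\quad \exists X\,\bigl[\varnothing\in X\wedge\forall y\,\bigl(y\in X\rightarrow S(y)\in X\bigr)\bigr],
\qquad S(y):=y\cup\{y\}
\]
\[
\textbf{Power set:}\quad \forall x\,\exists y\,\forall z\,(z\subseteq x\rightarrow z\in y)
\]
\[
\textbf{Well-ordering (choice):}\quad \forall X\,\exists R\,\bigl(R\ \text{well-orders}\ X\bigr)
\]
```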
Given axioms 1 – 8, many statements are equivalent to axiom 9. The most common of these goes as follows. Let be a set whose members are all nonempty. Then there exists a function from to the union of the members of , called a "choice function", such that for all one has . A third version of the axiom, also equivalent, is Zorn's lemma. Since the existence of a choice function when is a finite set is easily proved from axioms 1–8, AC only matters for certain infinite sets. AC is characterized as nonconstructive because it asserts the existence of a choice function but says nothing about how this choice function is to be "constructed". Motivation via the cumulative hierarchy One motivation for the ZFC axioms is the cumulative hierarchy of sets introduced by John von Neumann. In this viewpoint, the universe of set theory is built up in stages, with one stage for each ordinal number. At stage 0, there are no sets yet. At each following stage, a set is added to the universe if all of its elements have been added at previous stages. Thus the empty set is added at stage 1, and the set containing the empty set is added at stage 2. The collection of all sets that are obtained in this way, over all the stages, is known as V. The sets in V can be arranged into a hierarchy by assigning to each set the first stage at which that set was added to V. It is provable that a set is in V if and only if the set is pure and well-founded. And V satisfies all the axioms of ZFC if the class of ordinals has appropriate reflection properties. For example, suppose that a set x is added at stage α, which means that every element of x was added at a stage earlier than α. Then, every subset of x is also added at (or before) stage α, because all elements of any subset of x were also added before stage α. This means that any subset of x which the axiom of separation can construct is added at (or before) stage α, and that the powerset of x will be added at the next stage after α. The picture of the universe of sets stratified into the cumulative hierarchy is characteristic of ZFC and related axiomatic set theories such as Von Neumann–Bernays–Gödel set theory (often called NBG) and Morse–Kelley set theory. The cumulative hierarchy is not compatible with other set theories such as New Foundations. It is possible to change the definition of V so that at each stage, instead of adding all the subsets of the union of the previous stages, subsets are only added if they are definable in a certain sense. This results in a more "narrow" hierarchy, which gives the constructible universe L, which also satisfies all the axioms of ZFC, including the axiom of choice. It is independent from the ZFC axioms whether V = L. Although the structure of L is more regular and well behaved than that of V, few mathematicians argue that V = L should be added to ZFC as an additional "axiom of constructibility". Metamathematics Virtual classes Proper classes (collections of mathematical objects defined by a property shared by their members which are too big to be sets) can only be treated indirectly in ZF (and thus ZFC). An alternative to proper classes while staying within ZF and ZFC is the virtual class notational construct introduced by , where the entire construct y ∈ { x | Fx } is simply defined as Fy. This provides a simple notation for classes that can contain sets but need not themselves be sets, while not committing to the ontology of classes (because the notation can be syntactically converted to one that only uses sets). 
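Written out, the virtual-class convention is the following definitional schema (the example class $V$ of all sets is a standard illustration added here, not taken from the text).

```latex
\[
y\in\{x\mid\varphi(x)\}\ :\Longleftrightarrow\ \varphi(y),
\qquad\text{e.g.}\qquad
y\in V:=\{x\mid x=x\}\ \Longleftrightarrow\ y=y .
\]
```

Thus "$y\in V$" is a theorem for every set $y$ even though no set of all sets exists; occurrences of the class notation are eliminated syntactically, so no commitment to classes as objects is made.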
Quine's approach built on the earlier approach of . Virtual classes are also used in , , and in the Metamath implementation of ZFC. Finite axiomatization The axiom schemata of replacement and separation each contain infinitely many instances. included a result first proved in his 1957 Ph.D. thesis: if ZFC is consistent, it is impossible to axiomatize ZFC using only finitely many axioms. On the other hand, von Neumann–Bernays–Gödel set theory (NBG) can be finitely axiomatized. The ontology of NBG includes proper classes as well as sets; a set is any class that can be a member of another class. NBG and ZFC are equivalent set theories in the sense that any theorem not mentioning classes and provable in one theory can be proved in the other. Consistency Gödel's second incompleteness theorem says that a recursively axiomatizable system that can interpret Robinson arithmetic can prove its own consistency only if it is inconsistent. Moreover, Robinson arithmetic can be interpreted in general set theory, a small fragment of ZFC. Hence the consistency of ZFC cannot be proved within ZFC itself (unless it is actually inconsistent). Thus, to the extent that ZFC is identified with ordinary mathematics, the consistency of ZFC cannot be demonstrated in ordinary mathematics. The consistency of ZFC does follow from the existence of a weakly inaccessible cardinal, which is unprovable in ZFC if ZFC is consistent. Nevertheless, it is deemed unlikely that ZFC harbors an unsuspected contradiction; it is widely believed that if ZFC were inconsistent, that fact would have been uncovered by now. This much is certain — ZFC is immune to the classic paradoxes of naive set theory: Russell's paradox, the Burali-Forti paradox, and Cantor's paradox. studied a subtheory of ZFC consisting of the axioms of extensionality, union, powerset, replacement, and choice. Using models, they proved this subtheory consistent, and proved that each of the axioms of extensionality, replacement, and power set is independent of the four remaining axioms of this subtheory. If this subtheory is augmented with the axiom of infinity, each of the axioms of union, choice, and infinity is independent of the five remaining axioms. Because there are non-well-founded models that satisfy each axiom of ZFC except the axiom of regularity, that axiom is independent of the other ZFC axioms. If consistent, ZFC cannot prove the existence of the inaccessible cardinals that category theory requires. Huge sets of this nature are possible if ZF is augmented with Tarski's axiom. Assuming that axiom turns the axioms of infinity, power set, and choice (7 – 9 above) into theorems. Independence Many important statements are independent of ZFC. The independence is usually proved by forcing, whereby it is shown that every countable transitive model of ZFC (sometimes augmented with large cardinal axioms) can be expanded to satisfy the statement in question. A different expansion is then shown to satisfy the negation of the statement. An independence proof by forcing automatically proves independence from arithmetical statements, other concrete statements, and large cardinal axioms. Some statements independent of ZFC can be proven to hold in particular inner models, such as in the constructible universe. However, some statements that are true about constructible sets are not consistent with hypothesized large cardinal axioms. 
Forcing proves that the following statements are independent of ZFC: Axiom of constructibility (V=L) (which is also not a ZFC axiom) Continuum hypothesis Diamond principle Martin's axiom (which is not a ZFC axiom) Suslin hypothesis Remarks: The consistency of V=L is provable by inner models but not forcing: every model of ZF can be trimmed to become a model of ZFC + V=L. The diamond principle implies the continuum hypothesis and the negation of the Suslin hypothesis. Martin's axiom plus the negation of the continuum hypothesis implies the Suslin hypothesis. The constructible universe satisfies the generalized continuum hypothesis, the diamond principle, Martin's axiom and the Kurepa hypothesis. The failure of the Kurepa hypothesis is equiconsistent with the existence of a strongly inaccessible cardinal. A variation on the method of forcing can also be used to demonstrate the consistency and unprovability of the axiom of choice, i.e., that the axiom of choice is independent of ZF. The consistency of choice can be (relatively) easily verified by proving that the inner model L satisfies choice. (Thus every model of ZF contains a submodel of ZFC, so that Con(ZF) implies Con(ZFC).) Since forcing preserves choice, we cannot directly produce a model contradicting choice from a model satisfying choice. However, we can use forcing to create a model which contains a suitable submodel, namely one satisfying ZF but not C. Another method of proving independence results, one owing nothing to forcing, is based on Gödel's second incompleteness theorem. This approach employs the statement whose independence is being examined, to prove the existence of a set model of ZFC, in which case Con(ZFC) is true. Since ZFC satisfies the conditions of Gödel's second theorem, the consistency of ZFC is unprovable in ZFC (provided that ZFC is, in fact, consistent). Hence no statement allowing such a proof can be proved in ZFC. This method can prove that the existence of large cardinals is not provable in ZFC, but cannot prove that assuming such cardinals, given ZFC, is free of contradiction. Proposed additions The project to unify set theorists behind additional axioms to resolve the continuum hypothesis or other meta-mathematical ambiguities is sometimes known as "Gödel's program". Mathematicians currently debate which axioms are the most plausible or "self-evident", which axioms are the most useful in various domains, and about to what degree usefulness should be traded off with plausibility; some "multiverse" set theorists argue that usefulness should be the sole ultimate criterion in which axioms to customarily adopt. One school of thought leans on expanding the "iterative" concept of a set to produce a set-theoretic universe with an interesting and complex but reasonably tractable structure by adopting forcing axioms; another school advocates for a tidier, less cluttered universe, perhaps focused on a "core" inner model. Criticisms ZFC has been criticized both for being excessively strong and for being excessively weak, as well as for its failure to capture objects such as proper classes and the universal set. Many mathematical theorems can be proven in much weaker systems than ZFC, such as Peano arithmetic and second-order arithmetic (as explored by the program of reverse mathematics). Saunders Mac Lane and Solomon Feferman have both made this point. 
Some of "mainstream mathematics" (mathematics not directly connected with axiomatic set theory) is beyond Peano arithmetic and second-order arithmetic, but still, all such mathematics can be carried out in ZC (Zermelo set theory with choice), another theory weaker than ZFC. Much of the power of ZFC, including the axiom of regularity and the axiom schema of replacement, is included primarily to facilitate the study of the set theory itself. On the other hand, among axiomatic set theories, ZFC is comparatively weak. Unlike New Foundations, ZFC does not admit the existence of a universal set. Hence the universe of sets under ZFC is not closed under the elementary operations of the algebra of sets. Unlike von Neumann–Bernays–Gödel set theory (NBG) and Morse–Kelley set theory (MK), ZFC does not admit the existence of proper classes. A further comparative weakness of ZFC is that the axiom of choice included in ZFC is weaker than the axiom of global choice included in NBG and MK. There are numerous mathematical statements independent of ZFC. These include the continuum hypothesis, the Whitehead problem, and the normal Moore space conjecture. Some of these conjectures are provable with the addition of axioms such as Martin's axiom or large cardinal axioms to ZFC. Some others are decided in ZF+AD where AD is the axiom of determinacy, a strong supposition incompatible with choice. One attraction of large cardinal axioms is that they enable many results from ZF+AD to be established in ZFC adjoined by some large cardinal axiom. The Mizar system and metamath have adopted Tarski–Grothendieck set theory, an extension of ZFC, so that proofs involving Grothendieck universes (encountered in category theory and algebraic geometry) can be formalized.
Mathematics
Axiomatic systems
null
152288
https://en.wikipedia.org/wiki/Persimmon
Persimmon
The persimmon () is the edible fruit of a number of species of trees in the genus Diospyros. The most widely cultivated of these is the kaki persimmon, Diospyros kaki Diospyros is in the family Ebenaceae, and a number of non-persimmon species of the genus are grown for ebony timber. In 2022, China produced 77% of the world total of persimmons. Description Like the tomato, the persimmon is not a berry in the general culinary sense, but its morphology as a single fleshy fruit derived from the ovary of a single flower means it is a berry in the botanical sense. The tree Diospyros kaki is the most widely cultivated species of persimmon. Typically the tree reaches in height and is round-topped. It usually stands erect, but sometimes can be crooked or have a willowy appearance. The leaves are long, and are oblong in shape with brown-hairy petioles in length. They are leathery and glossy on the upper surface, brown and silky underneath. The leaves are deciduous and bluish-green in color. In autumn, they turn to yellow, orange, or red. Persimmon trees are typically dioecious, meaning male and female flowers are produced on separate trees. Some trees have both male and female flowers and in rare cases may bear a perfect flower, which contains both male and female reproductive organs in one flower. Male flowers are pink and appear in groups of three. They have a four-parted calyx, a corolla, and 24 stamens in two rows. Female flowers are creamy-white and appear singly. They have a large calyx, a four-parted, yellow corolla, eight undeveloped stamens, and a rounded ovary bearing the style and stigma. 'Perfect' flowers are a cross between the two. Persimmon fruit matures late in the fall and can stay on the tree until winter. In color, the ripe fruit of the cultivated strains range from glossy light yellow-orange to dark red-orange depending on the species and variety. They similarly vary in size from in diameter, and in shape the varieties may be spherical, acorn-, or pumpkin-shaped. The flesh is astringent until fully ripe and is yellow, orange, or dark-brown in color. The calyx generally remains attached to the fruit after harvesting, but becomes easy to remove once the fruit is ripe. The ripe fruit is high in sucrose, mainly in the form of fructose and glucose content, and is sweet in taste. Chemistry Persimmon fruits contain phytochemicals, such as catechin, gallocatechin and betulinic acid. Taxonomy Selected species While many species of Diospyros bear fruit inedible to humans or only occasionally gathered, the following are grown for their edible fruit: Diospyros kaki (Oriental persimmon) Oriental persimmon, Chinese persimmon or Japanese persimmon (Diospyros kaki) is the most commercially important persimmon. It is native to China, Northeast India and northern Indochina. It was first cultivated in China more than 2,000 years ago, and introduced to Japan in the 7th century and to Korea in the 14th century. China, Japan and South Korea are also the top producers of persimmon. It is known as shi (柿) in Chinese, kaki (柿) in Japanese and gam (감) in Korean and also known as Korean mango. It is known as haluwabed (हलुवाबेद) in Nepal and it is used for various culinary purposes and eaten as a seasonal fruit. In Nepal, it is one of the most popular fruits and has been consumed for a very long time. 
It was introduced to California and southern Europe in the 1800s and to Brazil in the 1890s, in the State of São Paulo, afterwards spreading across Brazil with Japanese immigrants; the State of São Paulo is still the greatest producer within Brazil, with an area of dedicated to persimmon culture in 2003; It is deciduous, with broad, stiff leaves. Its fruits are sweet and slightly tangy with a soft to occasionally fibrous texture. Varieties Numerous cultivars have been selected. Some varieties are edible in the crisp, firm state but it has its best flavor when allowed to rest and soften slightly after harvest. The Japanese cultivar 'Hachiya' is widely grown. The fruit has a high tannin content, which makes the unripe fruit astringent and bitter. The tannin levels are reduced as the fruit matures. Persimmons like 'Hachiya' must be completely ripened before consumption. When ripe, this fruit consists of thick, pulpy jelly encased in a waxy thin-skinned shell. Commercially and in general, there are two types of persimmon fruit: astringent and non-astringent. The heart-shaped Hachiya is the most common variety of astringent persimmon. Astringent persimmons contain very high levels of soluble tannins and are unpalatable if eaten before completely softened. The astringency of tannins is removed in various ways. Examples include ripening by exposure to light for several days and wrapping the fruit in paper (probably because this increases the ethylene concentration of the surrounding air). Ethylene ripening can be increased in reliability and evenness, and the process can be greatly accelerated by adding ethylene gas to the atmosphere in which the fruit is stored. For domestic purposes, the most convenient and effective process is to store the ripening persimmons in a clean, dry container together with other varieties of fruit that give off particularly large quantities of ethylene while they are ripening; apples and related fruits such as pears are effective, as well as bananas and several others. Other chemicals are used commercially in artificially ripening persimmons or delaying their ripening. Examples include alcohol and carbon dioxide, which change tannin into the insoluble form. Such bletting processes sometimes are jump-started by exposing the fruit to cold or frost. The resultant cell damage stimulates the release of ethylene, which promotes cellular wall breakdown. Astringent varieties of persimmons also can be prepared for commercial purposes by drying. Tanenashi fruit will occasionally contain a seed or two, which can be planted and will yield a larger, more vertical tree than when merely grafted onto the D. virginiana rootstock most commonly used in the U.S. Such seedling trees may produce fruit that bears more seeds, usually six to eight per fruit, and the fruit itself may vary slightly from the parent tree. Seedlings are said to be more susceptible to root nematodes. The non-astringent persimmon is squat like a tomato and is most commonly sold as fuyu. Non-astringent persimmons are not actually free of tannins as the term suggests but rather are far less astringent before ripening and lose more of their tannic quality sooner. Non-astringent persimmons may be consumed when still very firm and remain edible when very soft. There is a third type, less commonly available, the pollination-variant non-astringent persimmons. When fully pollinated, the flesh of these fruit is brown inside—known as goma in Japan—and the fruit can be eaten when firm. These varieties are highly sought after. 
Tsurunoko, sold as "chocolate persimmon" for its dark brown flesh, Maru, sold as "cinnamon persimmon" for its spicy flavor, and Hyakume, sold as "brown sugar", are the three best known. Diospyros lotus (date-plum) Date-plum (Diospyros lotus), also known as lotus persimmon, is native to temperate Asia and southeast Europe. Its English name probably derives from Persian Khormaloo خرمالو literally "date-plum", referring to the taste of this fruit, which is reminiscent of both plums and dates. Diospyros decandra Diospyros decandra is native to Mainland Southeast Asia and its fruit peel is golden yellow. Diospyros virginiana (American persimmon) American persimmon (Diospyros virginiana) is native to the eastern United States. Harvested in the fall or after the first frost, its fruit is eaten fresh, in baked goods, in steamed puddings, and to make a mildly alcoholic beverage called persimmon beer. Varieties Prok Killen Claypool I-115 Dollywood 100-42 100-43 100-45 Early Golden John Rick C-100 JF-I Diospyros blancoi (velvet persimmon) The Mabolo or velvet-apple (Diospyros blancoi; syn. Diospyros discolor) is native to Taiwan, the Philippines and Borneo. Diospyros texana (Texas persimmon) Texas persimmon (Diospyros texana) is native to central and west Texas and southwest Oklahoma in the United States, and eastern Chihuahua, Coahuila, Nuevo León, and Tamaulipas in northeastern Mexico. The fruit of D. texana are black, subglobose berries with a diameter of that ripen in August. The fleshy berries become edible when they turn dark purple or black, at which point they are sweet and can be eaten from the hand or made into pudding or custard. Etymology The word persimmon is derived from putchamin, pasiminan, pechimin or pessamin, from Powhatan, an Algonquian language of the southern and eastern United States, meaning "a dry fruit". Other sources have suggested that the word "persimmon" comes from a Persian word meaning date-plum. It was first used in English in the early 17th century. Production In 2022, world production of persimmons was 4.44 million tonnes, led by China with 77% of the total (table). In China, the Taiqiu persimmon variety yields approximately 30 tonnes of fruit per year at full production. Australia The persimmon was introduced to Australia by Chinese immigrants in the 1850s. Only astringent varieties were cultivated until the introduction of non-astringent varieties from Japan in the 1970s. In 2022 the vast majority of persimmons sold domestically in Australia were non-astringent varieties. Azerbaijan Persimmons are one of Azerbaijan's most important non-petroleum exports. The main export markets are Russia, Ukraine, Belarus, Iran, Kazakhstan and the United Arab Emirates. India Persimmons have various local names across India, including japani phal or amar phal in Uttar Pradesh, amlok in Assam, lukum in Manipur, and Seemai Panichai in Tamilnadu. They are grown in Jammu & Kashmir, Himachal Pradesh, Tamil Nadu, Uttarakhand, Sikkim, Darjeeling Region of West Bengal & Arunachal Pradesh. Israel The primary variety produced in Israel is the Sharon fruit. Israel produces of Sharon fruit a year. "Sharon fruit" (named after the Sharon plain in Israel) is the marketing name for the Israeli-bred cultivar 'Triumph'. As with most commercial pollination-variant-astringent persimmons, the fruit are ripened off the tree by exposing them to carbon dioxide. The "sharon fruit" has no core, is seedless and particularly sweet, and can be eaten whole. 
Spain The primary variety produced in Spain is the Rojo Brillante. Spain produces 400,000 tons of Rojo Brillante a year. In the Valencia region of Spain, there is a production area of kaki called the "Ribera del Xùquer" which has a protected label and where only persimmons of the variety "Rojo Brillante" or derived mutations are cultivated. The largest part of these astringent type persimmons are CO2 treated to remove astringency and marketed as "Persimon" with one "m", which is a registered trademark. United States California produces of Fuyu a year. Most persimmons produced in California are seedless. California and Florida account for most commercial production. The first commercial orchards in Florida were planted in the 1870s and production peaked in the 1990s before declining. Most persimmon orchards in the US are small scale (70% less than and 90% less than ). Toxicity Unripe persimmons contain the soluble tannin shibuol, which, upon contact with a weak acid, polymerizes in the stomach and forms a gluey coagulum, a "foodball" or phytobezoar, that can affix with other stomach matter. These phytobezoars are often very hard and almost woody in consistency. More than 85% of phytobezoars are caused by ingestion of unripened persimmons. Persimmon bezoars (diospyrobezoars) often occur in epidemics in regions where the fruit is grown. Uses Persimmons are eaten fresh, dried, raw or cooked. When eaten fresh, they are usually eaten whole like an apple in bite-size slices and may be peeled, although the skin is edible. One way to consume ripe persimmons, which may have soft texture, is to remove the top leaf with a paring knife and scoop out the flesh with a spoon. Riper persimmons can also be eaten by removing the top leaf, breaking the fruit in half, and eating from the inside out. The flesh ranges from firm to mushy, and, when firm owing to being unripe, has an apple-like crunch. Some varieties are completely inedible until they are fully ripe, such as American persimmons (Diospyros virginiana) and Diospyros digyna. The leaves can be used to make a tisane and the seeds can be roasted. In Korea, dried persimmon fruits are used to make the traditional Korean spicy punch sujeonggwa, while the matured, fermented fruit is used to make a persimmon vinegar called gamsikcho. In Taiwan, fruits of astringent varieties are sealed in jars filled with limewater to get rid of bitterness. Slightly hardened in the process, they are sold under the name "crisp persimmon" (cuishi) or "water persimmon" (shuishizi). Preparation time is dependent upon temperature (5 to 7 days at . For centuries, Japanese have consumed persimmon leaf tea (Kaki-No-Ha Cha) made from the dried leaves of "kaki" persimmons (Diospyros kaki). In some areas of Manchuria and Korea, the dried leaves of the fruit are used for making tea. The Korean name for this tea is gamnip cha. In the US from Ohio southward, persimmons are harvested and used in a variety of dessert dishes, most notably pies. They can be used in cookies, cakes, puddings, salads, curries and as a topping for breakfast cereal. Persimmon pudding is a baked dessert made with fresh persimmons that has the consistency of pumpkin pie but resembles a brownie and is almost always topped with whipped cream. An annual persimmon festival, featuring a persimmon pudding contest, is held every September in Mitchell, Indiana. Persimmons may be stored at room temperature where they will continue to ripen. 
In northern China, unripe persimmons are frozen outdoors during winter to speed up the ripening process. Ripe persimmons can be refrigerated for as long as a couple of weeks, though extreme temperature changes may contribute to a mushy texture. It is recommended to store persimmons stem end down. Persimmons can also be fermented in the manner of black garlic. Dried In China, Korea, Japan and Vietnam, persimmons after harvesting are prepared using traditional hand-drying techniques outdoors for two to three weeks. The fruit is then further dried by exposure to heat over several days before being shipped to market, to be sold as dried fruit. In Japan, the dried persimmon fruit is called hoshigaki, in China shìbǐng (柿餠), in Korea gotgam or Geonsi (乾枾), and in Vietnam hồng khô (紅枯). It is eaten as a snack or dessert and used for other culinary purposes. Nutrition Persimmons have higher levels of dietary fiber and some dietary minerals than apples, but overall are not a significant source of micronutrients, except for manganese (17% of the Daily Value, DV) and provitamin A beta-carotene (10% DV, table for raw Japanese persimmons per 100-gram amount). Raw American persimmons are a rich source of vitamin C (80% DV per 100g) and iron (19% DV). Culture In Ozark folklore, the severity of the upcoming winter is said to be predictable by slicing a persimmon seed and seeing whether it is shaped like a knife, fork, or spoon within. According to the Missouri Department of Conservation, this is not a reliable method. In Korean folklore the dried persimmon (gotgam, Korean: 곶감) has a reputation for scaring away tigers. In Malaysia and Singapore, large persimmons are viewed as a status symbol. Diseases In 1999, the first report of leaf blight on sweet persimmon tree by fungal pathogen Pestalotiopsis theae in Spain was documented.
Biology and health sciences
Tropical and tropical-like fruit
Plants
152393
https://en.wikipedia.org/wiki/Bulbul
Bulbul
The bulbuls are members of a family, Pycnonotidae, of medium-sized passerine songbirds, which also includes greenbuls, brownbuls, leafloves, and bristlebills. The family is distributed across most of Africa and into the Middle East, tropical Asia to Indonesia, and north as far as Japan. A few insular species occur on the tropical islands of the Indian Ocean. There are 166 species in 32 genera. While different species are found in a wide range of habitats, the African species are predominantly found in rainforest, whereas Asian bulbuls are predominantly found in more open areas. Taxonomy The family Pycnonotidae was introduced by the English zoologist George Robert Gray in 1840 as a subfamily Pycnonotinae of the thrush family Turdidae. The Arabic word bulbul (بلبل) is sometimes used to refer to the "nightingale" as well as the bulbul, but the English word bulbul refers to the birds discussed in this article. A few species that were previously considered to be members of the Pycnonotidae have been moved to other families. Several Malagasy species that were formerly placed in the genus Phyllastrephus are now placed in the family Bernieridae. In addition, the genus Nicator containing three African species is now placed in a separate family Nicatoridae. A study published in 2007 by Ulf Johansson and colleagues using three nuclear markers found that the genus Andropadus was non-monophyletic. In the subsequent revision, species were moved to three resurrected genera: Arizelocichla, Stelgidillas and Eurillas. Only the sombre greenbul (Andropadus importunus), was retained in Andropadus. A study by Subir Shakya and Frederick Shelden published in 2017 found that species in the large genus  Pycnonotus formed several deeply divergent clades. The genus was split and six genera were resurrected to accommodate these clades. The family forms two main clades. One clade contains species that are only found in Africa; many of these have greenbul in the common name. The second clade contains mostly Asian species but includes a few species that are found in Africa. 
List of genera Currently, there are 167 recognized species in 32 genera: Genus Andropadus – sombre greenbul (formerly contained many species) Genus Stelgidillas – slender-billed greenbul (formerly in Andropadus) Genus Calyptocichla – golden greenbul Genus Neolestes – black-collared bulbul Genus Bleda – bristlebills (5 species) Genus Atimastillas – greenbuls (2 species) Genus Ixonotus – spotted greenbul Genus Thescelocichla – swamp palm bulbul Genus Chlorocichla – greenbuls (5 species) Genus Baeopogon – greenbuls (2 species) Genus Arizelocichla – greenbuls (11 species) (formerly in Andropadus) Genus Criniger – greenbuls (5 species) Genus Eurillas – greenbuls (5 species) (formerly in Andropadus) Genus Phyllastrephus – greenbuls and brownbuls (21 species) Genus Tricholestes – hairy-backed bulbul Genus Setornis – hook-billed bulbul Genus Alophoixus – 8 species (formerly in Criniger) Genus Alcurus – striated bulbul Genus Iole – 7 species Genus Hemixos – 4 species Genus Acritillas – yellow-browed bulbul Genus Ixos – 5 species Genus Hypsipetes – 26 species (includes 3 species formerly in Thapsinillas, one formerly in Cerasophila and one formerly in Microscelis) Genus Euptilotus – puff-backed bulbul (formerly in Pycnonotus) Genus Microtarsus – black-and-white bulbul (formerly in Pycnonotus) Genus Poliolophus – yellow-wattled bulbul (formerly in Pycnonotus) Genus Brachypodius – 4 species (formerly in Pycnonotus) Genus Ixodia – 3 species (formerly in Pycnonotus) Genus Rubigula – 5 species (formerly in Pycnonotus) Genus Nok – bare-faced bulbul (genus introduced in 2017) Genus Spizixos – finchbills (2 species) Genus Pycnonotus – 34 species (substantially reduced from earlier classification) Cladogram Description Bulbuls are short-necked slender passerines. The tails are long and the wings short and rounded. In almost all species the bill is slightly elongated and slightly hooked at the end. They vary in length from 13 cm and for the tiny greenbul to 29 cm and in the straw-headed bulbul. Overall the sexes are alike, although the females tend to be slightly smaller. In a few species the differences are so great that they have been described as functionally different species. The soft plumage of some species is colorful with yellow, red or orange vents, cheeks, throat or supercilia, but most are drab, with uniform olive-brown to black plumage. Species with dull coloured eyes often sport contrasting eyerings. Some have very distinct crests. Bulbuls are highly vocal, with the calls of most species being described as nasal or gravelly. One author described the song of the brown-eared bulbul as "one of the most unattractive noises made by any bird". Behaviour and ecology Breeding The bulbuls are generally monogamous. One unusual exception is the yellow-whiskered greenbul which at least over part of its range appears to be polygamous and engage in a lekking system. Some species also have alloparenting arrangements, where non-breeders, usually the young from earlier clutches, help raise the young of a dominant breeding pair. Up to five speckled eggs are laid in open tree nests and incubated by the female. Incubation usually lasts between 11 and 14 days, and chicks fledge after 12–16 days. Feeding Bulbuls eat a wide range of foods, ranging from fruit to seeds, nectar, small insects and other arthropods and even small vertebrates. The majority of species are frugivorous and supplement their diet with some insects, although there is a significant minority of specialists, particularly in Africa. 
Open country species in particular are generalists. Bulbuls in the genus Criniger and bristlebills in the genus Bleda will join mixed-species feeding flocks. Relationship to humans The red-whiskered bulbuls and red-vented bulbuls have been captured for the pet trade in great numbers and have been widely introduced to tropical and subtropical areas, for example, southern Florida, Fiji, Australia and Hawaii. Some species are regarded as crop pests, particularly in orchards. In general, bulbuls and greenbuls are resistant to human pressures on the environment and are tolerant of disturbed habitat. Around 13 species are considered threatened by human activities, mostly specialised forest species that are threatened by habitat loss.
Biology and health sciences
Passerida
null
152440
https://en.wikipedia.org/wiki/Stellar%20nucleosynthesis
Stellar nucleosynthesis
In astrophysics, stellar nucleosynthesis is the creation of chemical elements by nuclear fusion reactions within stars. Stellar nucleosynthesis has occurred since the original creation of hydrogen, helium and lithium during the Big Bang. As a predictive theory, it yields accurate estimates of the observed abundances of the elements. It explains why the observed abundances of elements change over time and why some elements and their isotopes are much more abundant than others. The theory was initially proposed by Fred Hoyle in 1946, who later refined it in 1954. Further advances were made, especially to nucleosynthesis by neutron capture of the elements heavier than iron, by Margaret and Geoffrey Burbidge, William Alfred Fowler and Fred Hoyle in their famous 1957 B2FH paper, which became one of the most heavily cited papers in astrophysics history. Stars evolve because of changes in their composition (the abundance of their constituent elements) over their lifespans, first by burning hydrogen (main sequence star), then helium (horizontal branch star), and progressively burning higher elements. However, this does not by itself significantly alter the abundances of elements in the universe as the elements are contained within the star. Later in its life, a low-mass star will slowly eject its atmosphere via stellar wind, forming a planetary nebula, while a higher–mass star will eject mass via a sudden catastrophic event called a supernova. The term supernova nucleosynthesis is used to describe the creation of elements during the explosion of a massive star or white dwarf. The advanced sequence of burning fuels is driven by gravitational collapse and its associated heating, resulting in the subsequent burning of carbon, oxygen and silicon. However, most of the nucleosynthesis in the mass range (from silicon to nickel) is actually caused by the upper layers of the star collapsing onto the core, creating a compressional shock wave rebounding outward. The shock front briefly raises temperatures by roughly 50%, thereby causing furious burning for about a second. This final burning in massive stars, called explosive nucleosynthesis or supernova nucleosynthesis, is the final epoch of stellar nucleosynthesis. A stimulus to the development of the theory of nucleosynthesis was the discovery of variations in the abundances of elements found in the universe. The need for a physical description was already inspired by the relative abundances of the chemical elements in the solar system. Those abundances, when plotted on a graph as a function of the atomic number of the element, have a jagged sawtooth shape that varies by factors of tens of millions (see history of nucleosynthesis theory). This suggested a natural process that is not random. A second stimulus to understanding the processes of stellar nucleosynthesis occurred during the 20th century, when it was realized that the energy released from nuclear fusion reactions accounted for the longevity of the Sun as a source of heat and light. History In 1920, Arthur Eddington, on the basis of the precise measurements of atomic masses by F.W. Aston and a preliminary suggestion by Jean Perrin, proposed that stars obtained their energy from nuclear fusion of hydrogen to form helium and raised the possibility that the heavier elements are produced in stars. This was a preliminary step toward the idea of stellar nucleosynthesis. 
In 1928 George Gamow derived what is now called the Gamow factor, a quantum-mechanical formula yielding the probability for two contiguous nuclei to overcome the electrostatic Coulomb barrier between them and approach each other closely enough to undergo nuclear reaction due to the strong nuclear force which is effective only at very short distances. In the following decade the Gamow factor was used by Atkinson and Houtermans and later by Edward Teller and Gamow himself to derive the rate at which nuclear reactions would occur at the high temperatures believed to exist in stellar interiors. In 1939, in a Nobel lecture entitled "Energy Production in Stars", Hans Bethe analyzed the different possibilities for reactions by which hydrogen is fused into helium. He defined two processes that he believed to be the sources of energy in stars. The first one, the proton–proton chain reaction, is the dominant energy source in stars with masses up to about the mass of the Sun. The second process, the carbon–nitrogen–oxygen cycle, which was also considered by Carl Friedrich von Weizsäcker in 1938, is more important in more massive main-sequence stars. These works concerned the energy generation capable of keeping stars hot. A clear physical description of the proton–proton chain and of the CNO cycle appears in a 1968 textbook. Bethe's two papers did not address the creation of heavier nuclei, however. That theory was begun by Fred Hoyle in 1946 with his argument that a collection of very hot nuclei would assemble thermodynamically into iron. Hoyle followed that in 1954 with a paper describing how advanced fusion stages within massive stars would synthesize the elements from carbon to iron in mass. Hoyle's theory was extended to other processes, beginning with the publication of the 1957 review paper "Synthesis of the Elements in Stars" by Burbidge, Burbidge, Fowler and Hoyle, more commonly referred to as the B2FH paper. This review paper collected and refined earlier research into a heavily cited picture that gave promise of accounting for the observed relative abundances of the elements; but it did not itself enlarge Hoyle's 1954 picture for the origin of primary nuclei as much as many assumed, except in the understanding of nucleosynthesis of those elements heavier than iron by neutron capture. Significant improvements were made by Alastair G. W. Cameron and by Donald D. Clayton. In 1957 Cameron presented his own independent approach to nucleosynthesis, informed by Hoyle's example, and introduced computers into time-dependent calculations of evolution of nuclear systems. Clayton calculated the first time-dependent models of the s-process in 1961 and of the r-process in 1965, as well as of the burning of silicon into the abundant alpha-particle nuclei and iron-group elements in 1968, and discovered radiogenic chronologies for determining the age of the elements. 
Key reactions The most important reactions in stellar nucleosynthesis: Hydrogen fusion: Deuterium fusion The proton–proton chain The carbon–nitrogen–oxygen cycle Helium fusion: The triple-alpha process The alpha process Fusion of heavier elements: Lithium burning: a process found most commonly in brown dwarfs Carbon-burning process Neon-burning process Oxygen-burning process Silicon-burning process Production of elements heavier than iron: Neutron capture: The r-process The s-process Proton capture: The rp-process The p-process Photodisintegration Hydrogen fusion Hydrogen fusion (nuclear fusion of four protons to form a helium-4 nucleus) is the dominant process that generates energy in the cores of main-sequence stars. It is also called "hydrogen burning", which should not be confused with the chemical combustion of hydrogen in an oxidizing atmosphere. There are two predominant processes by which stellar hydrogen fusion occurs: proton–proton chain and the carbon–nitrogen–oxygen (CNO) cycle. Ninety percent of all stars, with the exception of white dwarfs, are fusing hydrogen by these two processes. In the cores of lower-mass main-sequence stars such as the Sun, the dominant energy production process is the proton–proton chain reaction. This creates a helium-4 nucleus through a sequence of reactions that begin with the fusion of two protons to form a deuterium nucleus (one proton plus one neutron) along with an ejected positron and neutrino. In each complete fusion cycle, the proton–proton chain reaction releases about 26.2 MeV. Proton-proton chain with a dependence of approximately T^4, meaning the reaction cycle is highly sensitive to temperature; a 10% rise of temperature would increase energy production by this method by 46%, hence, this hydrogen fusion process can occur in up to a third of the star's radius and occupy half the star's mass. For stars above 35% of the Sun's mass, the energy flux toward the surface is sufficiently low and energy transfer from the core region remains by radiative heat transfer, rather than by convective heat transfer. As a result, there is little mixing of fresh hydrogen into the core or fusion products outward. In higher-mass stars, the dominant energy production process is the CNO cycle, which is a catalytic cycle that uses nuclei of carbon, nitrogen and oxygen as intermediaries and in the end produces a helium nucleus as with the proton–proton chain. During a complete CNO cycle, 25.0 MeV of energy is released. The difference in energy production of this cycle, compared to the proton–proton chain reaction, is accounted for by the energy lost through neutrino emission. CNO cycle is highly sensitive to temperature, with rates proportional to T^{16-20}, a 10% rise of temperature would produce a 350% rise in energy production. About 90% of the CNO cycle energy generation occurs within the inner 15% of the star's mass, hence it is strongly concentrated at the core. This results in such an intense outward energy flux that convective energy transfer becomes more important than does radiative transfer. As a result, the core region becomes a convection zone, which stirs the hydrogen fusion region and keeps it well mixed with the surrounding proton-rich region. This core convection occurs in stars where the CNO cycle contributes more than 20% of the total energy. As the star ages and the core temperature increases, the region occupied by the convection zone slowly shrinks from 20% of the mass down to the inner 8% of the mass. 
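The percentage figures quoted above follow directly from the stated power-law temperature dependences. As a worked check (the symbols \varepsilon_{pp} and \varepsilon_{\mathrm{CNO}} for the energy generation rates are introduced here for illustration):

\frac{\varepsilon_{pp}(1.1\,T)}{\varepsilon_{pp}(T)} \approx 1.1^{4} \approx 1.46, \qquad \frac{\varepsilon_{\mathrm{CNO}}(1.1\,T)}{\varepsilon_{\mathrm{CNO}}(T)} \approx 1.1^{16} \approx 4.6,

i.e. roughly a 46% and a 350–360% increase in energy production for a 10% rise in temperature, consistent with the figures given in the text (the exact CNO figure depends on which exponent in the quoted T^{16}–T^{20} range is used).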
The Sun produces on the order of 1% of its energy from the CNO cycle. The type of hydrogen fusion process that dominates in a star is determined by the temperature dependency differences between the two reactions. The proton–proton chain reaction starts at temperatures about , making it the dominant fusion mechanism in smaller stars. A self-maintaining CNO chain requires a higher temperature of approximately , but thereafter it increases more rapidly in efficiency as the temperature rises, than does the proton–proton reaction. Above approximately , the CNO cycle becomes the dominant source of energy. This temperature is achieved in the cores of main-sequence stars with at least 1.3 times the mass of the Sun. The Sun itself has a core temperature of about . As a main-sequence star ages, the core temperature will rise, resulting in a steadily increasing contribution from its CNO cycle. Helium fusion Main sequence stars accumulate helium in their cores as a result of hydrogen fusion, but the core does not become hot enough to initiate helium fusion. Helium fusion first begins when a star leaves the red giant branch after accumulating sufficient helium in its core to ignite it. In stars around the mass of the Sun, this begins at the tip of the red giant branch with a helium flash from a degenerate helium core, and the star moves to the horizontal branch where it burns helium in its core. More massive stars ignite helium in their core without a flash and execute a blue loop before reaching the asymptotic giant branch. Such a star initially moves away from the AGB toward bluer colours, then loops back again to what is called the Hayashi track. An important consequence of blue loops is that they give rise to classical Cepheid variables, of central importance in determining distances in the Milky Way and to nearby galaxies. Despite the name, stars on a blue loop from the red giant branch are typically not blue in colour but are rather yellow giants, possibly Cepheid variables. They fuse helium until the core is largely carbon and oxygen. The most massive stars become supergiants when they leave the main sequence and quickly start helium fusion as they become red supergiants. After the helium is exhausted in the core of a star, helium fusion will continue in a shell around the carbon–oxygen core. In all cases, helium is fused to carbon via the triple-alpha process, i.e., three helium nuclei are transformed into carbon via 8Be. This can then form oxygen, neon, and heavier elements via the alpha process. In this way, the alpha process preferentially produces elements with even numbers of protons by the capture of helium nuclei. Elements with odd numbers of protons are formed by other fusion pathways. Reaction rate The reaction rate density between species A and B, having number densities nA,B, is given by: where k is the reaction rate constant of each single elementary binary reaction composing the nuclear fusion process: here, σ(v) is the cross-section at relative velocity v, and averaging is performed over all velocities. Semi-classically, the cross section is proportional to , where is the de Broglie wavelength. Thus semi-classically the cross section is proportional to . However, since the reaction involves quantum tunneling, there is an exponential damping at low energies that depends on Gamow factor EG, giving an Arrhenius equation: where S(E) depends on the details of the nuclear interaction, and has the dimension of an energy multiplied for a cross section. 
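The displayed formulas in this passage were lost in extraction. As a sketch, the standard expressions they correspond to are, for two distinct species and with \lambda the de Broglie wavelength and E_G the Gamow energy:

r_{AB} = n_A\, n_B\, \langle \sigma v \rangle, \qquad \sigma(E) \propto \lambda^{2} \propto \frac{1}{E}, \qquad \sigma(E) = \frac{S(E)}{E}\,\exp\!\left(-\sqrt{\frac{E_G}{E}}\right),

where the last form defines the astrophysical S-factor S(E) referred to in the text.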
One then integrates over all energies to get the total reaction rate, using the Maxwell–Boltzmann distribution and the relation: where is the reduced mass. Since this integration has an exponential damping at high energies of the form and at low energies from the Gamow factor, the integral almost vanishes everywhere except around the peak, called the Gamow peak, at E0, where: Thus: The exponent can then be approximated around E0 as: And the reaction rate is approximated as: Values of S(E0) are typically , but are damped by a huge factor when involving a beta decay, due to the relation between the intermediate bound state (e.g. diproton) half-life and the beta decay half-life, as in the proton–proton chain reaction. Note that typical core temperatures in main-sequence stars give kT of the order of keV. Thus, the limiting reaction in the CNO cycle, proton capture by , has S(E0) ~ S(0) = 3.5 keV·b, while the limiting reaction in the proton–proton chain reaction, the creation of deuterium from two protons, has a much lower S(E0) ~ S(0) = 4×10−22 keV·b. Incidentally, since the former reaction has a much higher Gamow factor, and due to the relative abundance of elements in typical stars, the two reaction rates are equal at a temperature value that is within the core temperature ranges of main-sequence stars.
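Likewise, a sketch of the standard forms of the lost equations for the thermally averaged rate and the Gamow peak, with \mu the reduced mass, is:

\langle \sigma v \rangle = \sqrt{\frac{8}{\pi\mu}}\,(kT)^{-3/2}\int_0^{\infty} S(E)\,\exp\!\left(-\frac{E}{kT}-\sqrt{\frac{E_G}{E}}\right) dE, \qquad E_0 = \left(\frac{E_G\,(kT)^{2}}{4}\right)^{1/3},

where E_0 is obtained by maximizing the exponent of the integrand.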
Physical sciences
Stellar astronomy
Astronomy
152464
https://en.wikipedia.org/wiki/Nuclide
Nuclide
Nuclides (or nucleides, from nucleus, also known as nuclear species) are a class of atoms characterized by their number of protons, Z, their number of neutrons, N, and their nuclear energy state. The word nuclide was coined by the American nuclear physicist Truman P. Kohman in 1947. Kohman defined nuclide as a "species of atom characterized by the constitution of its nucleus" containing a certain number of neutrons and protons. The term thus originally focused on the nucleus. Nuclides vs isotopes A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, while the isotope concept (grouping all atoms of each element) emphasizes chemical over nuclear. The neutron number has large effects on nuclear properties, but its effect on chemical reactions is negligible for most elements. Even in the case of the very lightest elements, where the ratio of neutron number to atomic number varies the most between isotopes, it usually has only a small effect, but it matters in some circumstances. For hydrogen, the lightest element, the isotope effect is large enough to affect biological systems strongly. In the case of helium, helium-4 obeys Bose–Einstein statistics, while helium-3 obeys Fermi–Dirac statistics. Since isotope is the older term, it is better known than nuclide, and is still occasionally used in contexts in which nuclide might be more appropriate, such as nuclear technology and nuclear medicine. Types of nuclides Although the words nuclide and isotope are often used interchangeably, being isotopes is actually only one relation between nuclides. The following table names some other relations. A set of nuclides with equal proton number (atomic number), i.e., of the same chemical element but different neutron numbers, are called isotopes of the element. Particular nuclides are still often loosely called "isotopes", but the term "nuclide" is the correct one in general (i.e., when Z is not fixed). In similar manner, a set of nuclides with equal mass number A, but different atomic number, are called isobars (isobar = equal in weight), and isotones are nuclides of equal neutron number but different proton numbers. Likewise, nuclides with the same neutron excess (N − Z) are called isodiaphers. The name isotone was derived from the name isotope to emphasize that in the first group of nuclides it is the number of neutrons (n) that is constant, whereas in the second the number of protons (p). See Isotope#Notation for an explanation of the notation used for different nuclide or isotope types. Nuclear isomers are members of a set of nuclides with equal proton number and equal mass number (thus making them by definition the same isotope), but different states of excitation. An example is the two states of the single isotope shown among the decay schemes. Each of these two states (technetium-99m and technetium-99) qualifies as a different nuclide, illustrating one way that nuclides may differ from isotopes (an isotope may consist of several different nuclides of different excitation states). The longest-lived non-ground state nuclear isomer is the nuclide tantalum-180m (), which has a half-life in excess of 1,000 trillion years. This nuclide occurs primordially, and has never been observed to decay to the ground state. 
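In the standard notation, a nuclide is written with its mass number A and atomic number Z; a small worked example using the carbon-13 case mentioned above:

{}^{A}_{Z}\mathrm{X}, \qquad N = A - Z, \qquad {}^{13}_{\;6}\mathrm{C}:\; Z = 6,\; N = 13 - 6 = 7.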
(In contrast, the ground state nuclide tantalum-180 does not occur primordially, since it decays with a half life of only 8 hours to 180Hf (86%) or 180W (14%).) There are 251 nuclides in nature that have never been observed to decay. They occur among the 80 different elements that have one or more stable isotopes. See stable nuclide and primordial nuclide. Unstable nuclides are radioactive and are called radionuclides. Their decay products ('daughter' products) are called radiogenic nuclides. Origins of naturally occurring radionuclides Natural radionuclides may be conveniently subdivided into three types. First, those whose half-lives t1/2 are at least 2% as long as the age of the Earth (for practical purposes, these are difficult to detect with half-lives less than 10% of the age of the Earth) (). These are remnants of nucleosynthesis that occurred in stars before the formation of the Solar System. For example, the isotope (t1/2 = ) of uranium is still fairly abundant in nature, but the shorter-lived isotope (t1/2 = ) is 138 times rarer. About 34 of these nuclides have been discovered (see List of nuclides and Primordial nuclide for details). The second group of radionuclides that exist naturally consists of radiogenic nuclides such as (t1/2 = ), an isotope of radium, which are formed by radioactive decay. They occur in the decay chains of primordial isotopes of uranium or thorium. Some of these nuclides are very short-lived, such as isotopes of francium. There exist about 51 of these daughter nuclides that have half-lives too short to be primordial, and which exist in nature solely due to decay from longer lived radioactive primordial nuclides. The third group consists of nuclides that are continuously being made in another fashion that is not simple spontaneous radioactive decay (i.e., only one atom involved with no incoming particle) but instead involves a natural nuclear reaction. These occur when atoms react with natural neutrons (from cosmic rays, spontaneous fission, or other sources), or are bombarded directly with cosmic rays. The latter, if non-primordial, are called cosmogenic nuclides. Other types of natural nuclear reactions produce nuclides that are said to be nucleogenic nuclides. An example of nuclides made by nuclear reactions, are cosmogenic (radiocarbon) that is made by cosmic ray bombardment of other elements, and nucleogenic which is still being created by neutron bombardment of natural as a result of natural fission in uranium ores. Cosmogenic nuclides may be either stable or radioactive. If they are stable, their existence must be deduced against a background of stable nuclides, since every known stable nuclide is present on Earth primordially. Artificially produced nuclides Beyond the naturally occurring nuclides, more than 3000 radionuclides of varying half-lives have been artificially produced and characterized. The known nuclides are shown in Table of nuclides. A list of primordial nuclides is given sorted by element, at List of elements by stability of isotopes. List of nuclides is sorted by half-life, for the 905 nuclides with half-lives longer than one hour. Summary table for numbers of each class of nuclides This is a summary table for the 905 nuclides with half-lives longer than one hour, given in list of nuclides. Note that numbers are not exact, and may change slightly in the future, if some "stable" nuclides are observed to be radioactive with very long half-lives. 
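The distinction drawn above between primordial and shorter-lived natural radionuclides rests on the exponential decay law; as a sketch, with N_0 the initial number of atoms:

N(t) = N_0\, 2^{-t/t_{1/2}} = N_0\, e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}},

so a nuclide whose half-life is a small fraction of the age of the Earth has passed through many half-lives and survives in nature today only if it is continually replenished by decay chains or by ongoing nuclear reactions.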
Nuclear properties and stability Atomic nuclei other than hydrogen have protons and neutrons bound together by the residual strong force. Because protons are positively charged, they repel each other. Neutrons, which are electrically neutral, stabilize the nucleus in two ways. Their copresence pushes protons slightly apart, reducing the electrostatic repulsion between the protons, and they exert the attractive nuclear force on each other and on protons. For this reason, one or more neutrons are necessary for two or more protons to be bound into a nucleus. As the number of protons increases, so does the ratio of neutrons to protons necessary to ensure a stable nucleus (see graph). For example, although the neutron–proton ratio of is 1:2, the neutron–proton ratio of is greater than 3:2. A number of lighter elements have stable nuclides with the ratio 1:1 (). The nuclide (calcium-40) is observationally the heaviest stable nuclide with the same number of neutrons and protons. All stable nuclides heavier than calcium-40 contain more neutrons than protons. Even and odd nucleon numbers The proton–neutron ratio is not the only factor affecting nuclear stability. It depends also on even or odd parity of its atomic number Z, neutron number N and, consequently, of their sum, the mass number A. Oddness of both Z and N tends to lower the nuclear binding energy, making odd nuclei, generally, less stable. This remarkable difference of nuclear binding energy between neighbouring nuclei, especially of odd-A isobars, has important consequences: unstable isotopes with a nonoptimal number of neutrons or protons decay by beta decay (including positron decay), electron capture or more exotic means, such as spontaneous fission and cluster decay. The majority of stable nuclides are even-proton–even-neutron, where all numbers Z, N, and A are even. The odd-A stable nuclides are divided (roughly evenly) into odd-proton–even-neutron, and even-proton–odd-neutron nuclides. Odd-proton–odd-neutron nuclides (and nuclei) are the least common.
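As an illustrative check of the rising neutron–proton ratio, compare calcium-40, mentioned above, with a heavy stable nuclide; the heavy example chosen here, lead-208, is not named in the article:

{}^{40}_{20}\mathrm{Ca}:\; \frac{N}{Z} = \frac{20}{20} = 1, \qquad {}^{208}_{\;82}\mathrm{Pb}:\; \frac{N}{Z} = \frac{126}{82} \approx 1.54.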
Physical sciences
Chemistry: General
null
152465
https://en.wikipedia.org/wiki/Optic%20nerve
Optic nerve
In neuroanatomy, the optic nerve, also known as the second cranial nerve, cranial nerve II, or simply CN II, is a paired cranial nerve that transmits visual information from the retina to the brain. In humans, the optic nerve is derived from optic stalks during the seventh week of development and is composed of retinal ganglion cell axons and glial cells; it extends from the optic disc to the optic chiasma and continues as the optic tract to the lateral geniculate nucleus, pretectal nuclei, and superior colliculus. Structure The optic nerve has been classified as the second of twelve paired cranial nerves, but it is technically a myelinated tract of the central nervous system, rather than a classical nerve of the peripheral nervous system because it is derived from an out-pouching of the diencephalon (optic stalks) during embryonic development. As a consequence, the fibers of the optic nerve are covered with myelin produced by oligodendrocytes, rather than Schwann cells of the peripheral nervous system, and are encased within the meninges. Peripheral neuropathies like Guillain–Barré syndrome do not affect the optic nerve. However, most typically, the optic nerve is grouped with the other eleven cranial nerves and is considered to be part of the peripheral nervous system. The optic nerve is ensheathed in all three meningeal layers (dura, arachnoid, and pia mater) rather than the epineurium, perineurium, and endoneurium found in peripheral nerves. Fiber tracts of the mammalian central nervous system have only limited regenerative capabilities compared to the peripheral nervous system. Therefore, in most mammals, optic nerve damage results in irreversible blindness. The fibers from the retina run along the optic nerve to nine primary visual nuclei in the brain, from which a major relay inputs into the primary visual cortex. The optic nerve is composed of retinal ganglion cell axons and glia. Each human optic nerve contains between 770,000 and 1.7 million nerve fibers, which are axons of the retinal ganglion cells of one retina. In the fovea, which has high acuity, these ganglion cells connect to as few as 5 photoreceptor cells; in other areas of the retina, they connect to thousands of photoreceptors. The optic nerve leaves the orbit (eye socket) via the optic canal, running postero-medially towards the optic chiasm, where there is a partial decussation (crossing) of fibers from the temporal visual fields (the nasal hemi-retina) of both eyes. The proportion of decussating fibers varies between species, and is correlated with the degree of binocular vision enjoyed by a species. Most of the axons of the optic nerve terminate in the lateral geniculate nucleus from where information is relayed to the visual cortex, while other axons terminate in the pretectal area and are involved in reflexive eye movements. Other axons terminate in the suprachiasmatic nucleus and are involved in regulating the sleep-wake cycle. Its diameter increases from about 1.6 mm within the eye to 3.5 mm in the orbit to 4.5 mm within the cranial space. The optic nerve component lengths are 1 mm in the globe, 24 mm in the orbit, 9 mm in the optic canal, and 16 mm in the cranial space before joining the optic chiasm. There, partial decussation occurs, and about 53% of the fibers cross to form the optic tracts. Most of these fibers terminate in the lateral geniculate body. 
Based on this anatomy, the optic nerve may be divided into four parts as indicated in the image at the top of this section (this view is from above as if you were looking into the orbit after the top of the skull had been removed): 1. the optic nerve head (where it begins in the eyeball (globe) with fibers from the retina); 2. the orbital part (the part within the orbit); 3. the intracanalicular part (the part within a bony canal known as the optic canal); and 4. the cranial part (the part within the cranial cavity, which ends at the optic chiasm). From the lateral geniculate body, fibers of the optic radiation pass to the visual cortex in the occipital lobe of the brain. In more specific terms, fibers carrying information from the contralateral superior visual field traverse Meyer's loop to terminate in the lingual gyrus below the calcarine fissure in the occipital lobe, and fibers carrying information from the contralateral inferior visual field terminate more superiorly, in the cuneus. Function The optic nerve transmits all visual information including brightness perception, color perception and contrast (visual acuity). It also conducts the visual impulses that are responsible for two important neurological reflexes: the light reflex and the accommodation reflex. The light reflex refers to the constriction of both pupils that occurs when light is shone into either eye. The accommodation reflex refers to the thickening of the lens of the eye that occurs when one looks at a near object (for example, when reading, the lens adjusts to near vision). The eye's blind spot is a result of the absence of photoreceptors in the area of the retina where the optic nerve leaves the eye. Clinical significance Disease Damage to the optic nerve typically causes permanent and potentially severe loss of vision, as well as an abnormal pupillary reflex, which is important for the diagnosis of nerve damage. The type of visual field loss will depend on which portions of the optic nerve were damaged. In general, the location of the damage in relation to the optic chiasm (see diagram above) will affect the areas of vision loss. Damage to the optic nerve that is anterior to, or in front of, the optic chiasm (toward the face) causes loss of vision in the eye on the same side as the damage. Damage at the optic chiasm itself typically causes loss of vision laterally in both visual fields, or bitemporal hemianopsia (see image to the right). Such damage may occur with large pituitary tumors, such as pituitary adenomas. Finally, damage to the optic tract, which is posterior to, or behind, the chiasm, causes loss of the entire visual field from the side opposite the damage; for example, if the left optic tract were cut, there would be a loss of vision from the entire right visual field. Injury to the optic nerve can be the result of congenital or inheritable problems like Leber's hereditary optic neuropathy, glaucoma, trauma, toxicity, inflammation, ischemia, infection (very rarely), or compression from tumors or aneurysms. By far, the three most common injuries to the optic nerve are from glaucoma; optic neuritis, especially in those younger than 50 years of age; and anterior ischemic optic neuropathy, usually in those older than 50. Glaucoma is a group of diseases involving loss of retinal ganglion cells causing optic neuropathy in a pattern of peripheral vision loss, initially sparing central vision. Glaucoma is frequently associated with increased intraocular pressure that damages the optic nerve as it exits the eyeball.
The trabecular meshwork assists the drainage of aqueous humor. The presence of excess aqueous humor increases intraocular pressure (IOP), leading to the signs and symptoms of glaucoma. Optic neuritis is inflammation of the optic nerve. It is associated with a number of diseases, the most notable one being multiple sclerosis. The patient will likely experience varying degrees of vision loss and eye pain. The condition tends to be episodic. Anterior ischemic optic neuropathy is commonly known as a "stroke of the optic nerve" and affects the optic nerve head (where the nerve exits the eyeball). There is usually a sudden loss of blood supply and nutrients to the optic nerve head. Vision loss is typically sudden and most commonly occurs upon waking up in the morning. This condition is most common in diabetic patients 40–70 years old. Other optic nerve problems are less common. Optic nerve hypoplasia is the underdevelopment of the optic nerve resulting in little to no vision in the affected eye. Tumors, especially those of the pituitary gland, can put pressure on the optic nerve causing various forms of visual loss. Similarly, cerebral aneurysms, swellings of blood vessels, can also affect the nerve. Trauma can cause serious injury to the nerve. Direct optic nerve injury can occur from a penetrating injury to the orbit, but the nerve can also be injured by indirect trauma in which severe head impact or movement stretches or even tears the nerve. Ophthalmologists and optometrists can detect and diagnose some optic nerve diseases, but neuro-ophthalmologists are often best suited to diagnose and treat diseases of the optic nerve. The International Foundation for Optic Nerve Diseases (IFOND) sponsors research and provides information on a variety of optic nerve disorders.
Biology and health sciences
Visual system
Biology
152509
https://en.wikipedia.org/wiki/Metastasis
Metastasis
Metastasis is a pathogenic agent's spreading from an initial or primary site to a different or secondary site within the host's body; the term is typically used when referring to metastasis by a cancerous tumor. The newly pathological sites, then, are metastases (mets). It is generally distinguished from cancer invasion, which is the direct extension and penetration by cancer cells into neighboring tissues. Cancer occurs after cells are genetically altered to proliferate rapidly and indefinitely. This uncontrolled proliferation by mitosis produces a primary heterogeneous tumor. The cells which constitute the tumor eventually undergo metaplasia, followed by dysplasia and then anaplasia, resulting in a malignant phenotype. This malignancy allows for invasion into the circulation, followed by invasion of a second site for tumorigenesis. Some cancer cells, known as circulating tumor cells (CTCs), are able to penetrate the walls of lymphatic or blood vessels, and circulate through the bloodstream to other sites and tissues in the body. This process, known respectively as lymphatic or hematogenous spread, allows not only single cells but also groups of cells, or CTC clusters, to travel. Evidence suggests that CTC clusters may retain their multicellular configuration throughout metastasis, enhancing their ability to establish secondary tumors. This perspective aligns with the cancer exodus hypothesis, which posits that maintaining this cluster structure contributes to a higher metastatic potential. Metastasis is one of the hallmarks of cancer, distinguishing it from benign tumors. Most cancers can metastasize, although to varying degrees. Basal cell carcinoma, for example, rarely metastasizes. When tumor cells metastasize, the new tumor is called a secondary or metastatic tumor, and its cells are similar to those in the original or primary tumor. This means that if breast cancer metastasizes to the lungs, the secondary tumor is made up of abnormal breast cells, not of abnormal lung cells. The tumor in the lung is then called metastatic breast cancer, not lung cancer. Metastasis is a key element in cancer staging systems such as the TNM staging system, where it represents the "M". In overall stage grouping, metastasis places a cancer in Stage IV. The possibilities of curative treatment are greatly reduced, or often entirely removed, when a cancer has metastasized. Signs and symptoms Nearby lymph nodes are typically affected early. The lungs, liver, brain, and bones are the most common metastasis locations from solid tumors. In lymph node metastasis, a common symptom is lymphadenopathy; in lung metastasis, cough, hemoptysis and dyspnea (shortness of breath); in liver metastasis, hepatomegaly (enlarged liver), nausea and jaundice; in bone metastasis, bone pain and fracture of affected bones; and in brain metastasis, neurological symptoms such as headaches, seizures, and vertigo. Although advanced cancer may cause pain, it is often not the first symptom. Some patients, however, do not show any symptoms. When an organ develops metastatic disease, it begins to shrink until its lymph nodes burst or undergo lysis. Pathophysiology Metastatic tumors are very common in the late stages of cancer. The spread of metastasis may occur via the blood or the lymphatics or through both routes.
The most common sites of metastases are the lungs, liver, brain, and bones. Currently, three main theories have been proposed to explain the metastatic pathway of cancer: the epithelial-mesenchymal transition (EMT) and mesenchymal-epithelial transition (MET) hypothesis, the cancer stem cell hypothesis, and the macrophage–cancer cell fusion hybrid hypothesis. Some new hypotheses have been suggested as well; for example, under the effect of particular biochemical and/or physical stressors, cancer cells can undergo nuclear expulsion with subsequent macrophage engulfment and fusion, forming cancer fusion cells (CFCs). Understanding the enigma of cancer cell spread to distant sites, which accounts for over 90% of cancer-related deaths, necessitates comprehensive investigation. Key outstanding questions revolve around the survival and migration of cancer cells and their large organelles, such as the nucleus, as they face the challenges of passing through capillaries and of hydrodynamic shear forces in the circulatory system, making CTCs an unlikely source of metastasis. Moreover, understanding how cancer cells adapt to the metastatic niche and remain dormant (tumor dormancy) for extended periods presents difficult questions that require further investigation. Factors involved Metastasis involves a complex series of steps in which cancer cells leave the original tumor site and migrate to other parts of the body via the bloodstream, via the lymphatic system, or by direct extension. To do so, malignant cells break away from the primary tumor and attach to and degrade proteins that make up the surrounding extracellular matrix (ECM), which separates the tumor from adjoining tissues. By degrading these proteins, cancer cells are able to breach the ECM and escape. The location of the metastases is not always random, with different types of cancer tending to spread to particular organs and tissues at a rate that is higher than expected by statistical chance alone. Breast cancer, for example, tends to metastasize to the bones and lungs. This specificity seems to be mediated by soluble signal molecules such as chemokines and transforming growth factor beta. The body resists metastasis by a variety of mechanisms through the actions of a class of proteins known as metastasis suppressors, of which about a dozen are known. Human cells exhibit different kinds of motion: collective motility, mesenchymal-type movement, and amoeboid movement. Cancer cells often opportunistically switch between different kinds of motion. Some cancer researchers hope to find treatments that can stop or at least slow down the spread of cancer by somehow blocking some necessary step in one or more kinds of motion. All steps of the metastatic cascade involve a number of physical processes. Cell migration requires the generation of forces, and when cancer cells transmigrate through the vasculature, this requires physical gaps in the blood vessels to form. Besides forces, the regulation of various types of cell-cell and cell-matrix adhesions is crucial during metastasis. The metastatic steps are critically regulated by various cell types, including blood vessel cells (endothelial cells), immune cells and stromal cells. The growth of a new network of blood vessels, called tumor angiogenesis, is a crucial hallmark of cancer. It has therefore been suggested that angiogenesis inhibitors would prevent the growth of metastases. Endothelial progenitor cells have been shown to have a strong influence on metastasis and angiogenesis.
Endothelial progenitor cells are important in tumor growth, angiogenesis and metastasis, and can be marked using the Inhibitor of DNA Binding 1 (ID1). This finding meant that investigators gained the ability to track endothelial progenitor cells from the bone marrow to the blood to the tumor stroma, and even to their incorporation into the tumor vasculature. The incorporation of endothelial progenitor cells into tumor vasculature suggests that this cell type is important for blood-vessel development in a tumor setting and for metastasis. Furthermore, ablation of the endothelial progenitor cells in the bone marrow can lead to a significant decrease in tumor growth and vasculature development. Therefore, endothelial progenitor cells are important in tumor biology and present novel therapeutic targets. The immune system is typically deregulated in cancer and affects many stages of tumor progression, including metastasis. Epigenetic regulation also plays an important role in the metastatic outgrowth of disseminated tumor cells. Metastases display alterations in histone modifications, such as H3K4-methylation and H3K9-methylation, when compared to matching primary tumors. These epigenetic modifications in metastases may allow the proliferation and survival of disseminated tumor cells in distant organs. A recent study shows that PKC-iota promotes melanoma cell invasion by activating Vimentin during EMT. PKC-iota inhibition or knockdown resulted in an increase in E-cadherin and RhoA levels while decreasing total Vimentin, phosphorylated Vimentin (S39) and Par6 in metastatic melanoma cells. These results suggested that PKC-iota is involved in signaling pathways which upregulate EMT in melanoma, thereby directly stimulating metastasis. Recently, a series of high-profile experiments has suggested that the co-option of intercellular cross-talk mediated by exosome vesicles is a critical factor involved in all steps of the invasion-metastasis cascade. Routes Metastasis occurs by the following four routes: Transcoelomic The spread of a malignancy into body cavities can occur via penetrating the surface of the peritoneal, pleural, pericardial, or subarachnoid spaces. For example, ovarian tumors can spread transperitoneally to the surface of the liver. Lymphatic spread Lymphatic spread allows the transport of tumor cells to regional lymph nodes near the primary tumor and, ultimately, to other parts of the body. This is called nodal involvement, positive nodes, or regional disease. "Positive nodes" is a term that would be used by medical specialists to describe regional lymph nodes that tested positive for malignancy. It is common medical practice to test by biopsy at least one lymph node near a tumor site when carrying out surgery to examine or remove a tumor. This lymph node is then called a sentinel lymph node. Lymphatic spread is the most common route of initial metastasis for carcinomas. In contrast, it is uncommon for a sarcoma to metastasize via this route. Localized spread to regional lymph nodes near the primary tumor is not normally counted as a metastasis, although this is a sign of a worse outcome. The lymphatic system does eventually drain from the thoracic duct and right lymphatic duct into the systemic venous system at the venous angle and into the brachiocephalic veins, and therefore these metastatic cells can also eventually spread through the haematogenous route.
Hematogenous spread This is the typical route of metastasis for sarcomas, but it is also the favored route for certain types of carcinoma, such as renal cell carcinoma originating in the kidney and follicular carcinomas of the thyroid. Because of their thinner walls, veins are more frequently invaded than are arteries, and metastasis tends to follow the pattern of venous flow. That is, hematogenous spread often follows distinct patterns depending on the location of the primary tumor. For example, colorectal cancer spreads primarily through the portal vein to the liver. Canalicular spread Some tumors, especially carcinomas, may metastasize along anatomical canalicular spaces. These spaces include, for example, the bile ducts, the urinary system, the airways and the subarachnoid space. The process is similar to that of transcoelomic spread. However, it often remains unclear whether simultaneously diagnosed tumors of a canalicular system are one metastatic process or in fact independent tumors caused by the same agent (field cancerization). Organ-specific targets There is a propensity for certain tumors to seed in particular organs. This was first discussed as the "seed and soil" theory by Stephen Paget in 1889. The propensity for a metastatic cell to spread to a particular organ is termed 'organotropism'. For example, prostate cancer usually metastasizes to the bones. In a similar manner, colon cancer has a tendency to metastasize to the liver. Stomach cancer often metastasises to the ovary in women, where it is called a Krukenberg tumor. According to the seed and soil theory, it is difficult for cancer cells to survive outside their region of origin, so in order to metastasize they must find a location with similar characteristics. For example, breast tumor cells, which gather calcium ions from breast milk, metastasize to bone tissue, where they can gather calcium ions from bone. Malignant melanoma spreads to the brain, presumably because neural tissue and melanocytes arise from the same cell line in the embryo. In 1928, James Ewing challenged the seed and soil theory and proposed that metastasis occurs purely by anatomic and mechanical routes. This hypothesis has recently been utilized to suggest several hypotheses about the life cycle of circulating tumor cells (CTCs) and to postulate that the patterns of spread could be better understood through a 'filter and flow' perspective. However, contemporary evidence indicates that the primary tumour may dictate organotropic metastases by inducing the formation of pre-metastatic niches at distant sites, where incoming metastatic cells may engraft and colonise. Specifically, exosome vesicles secreted by tumours have been shown to home to pre-metastatic sites, where they activate pro-metastatic processes such as angiogenesis and modify the immune contexture, so as to foster a favourable microenvironment for secondary tumour growth. Metastasis and primary cancer It is theorized that metastasis always coincides with a primary cancer, and, as such, is a tumor that started from a cancer cell or cells in another part of the body. However, over 10% of patients presenting to oncology units will have metastases without a primary tumor found. In these cases, doctors refer to the primary tumor as "unknown" or "occult," and the patient is said to have cancer of unknown primary origin (CUP) or unknown primary tumors (UPT). It is estimated that 3% of all cancers are of unknown primary origin.
Studies have shown that, if simple questioning does not reveal the cancer's source (coughing up blood—"probably lung", urinating blood—"probably bladder"), complex imaging will not either. In some of these cases a primary tumor may appear later. The use of immunohistochemistry has permitted pathologists to give an identity to many of these metastases. However, imaging of the indicated area only occasionally reveals a primary. In rare cases (e.g., of melanoma), no primary tumor is found, even on autopsy. It is therefore thought that some primary tumors can regress completely, but leave their metastases behind. In other cases, the tumor might just be too small and/or in an unusual location to be diagnosed. Diagnosis The cells in a metastatic tumor resemble those in the primary tumor. Once the cancerous tissue is examined under a microscope to determine the cell type, a doctor can usually tell whether that type of cell is normally found in the part of the body from which the tissue sample was taken. For instance, breast cancer cells look the same whether they are found in the breast or have spread to another part of the body. So, if a tissue sample taken from a tumor in the lung contains cells that look like breast cells, the doctor determines that the lung tumor is a secondary tumor. Still, the determination of the primary tumor can often be very difficult, and the pathologist may have to use several adjuvant techniques, such as immunohistochemistry, FISH (fluorescent in situ hybridization), and others. Despite the use of these techniques, in some cases the primary tumor remains unidentified. Metastatic cancers may be found at the same time as the primary tumor, or months or years later. When a second tumor is found in a patient who has been treated for cancer in the past, it is more often a metastasis than another primary tumor. It was previously thought that most cancer cells have a low metastatic potential and that there are rare cells that develop the ability to metastasize through the development of somatic mutations. According to this theory, diagnosis of metastatic cancers is only possible after the event of metastasis. Traditional means of diagnosing cancer (e.g. a biopsy) would only investigate a subpopulation of the cancer cells and would very likely not sample from the subpopulation with metastatic potential. The somatic mutation theory of metastasis development has not been substantiated in human cancers. Rather, it seems that the genetic state of the primary tumor reflects the ability of that cancer to metastasize. Research comparing gene expression between primary and metastatic adenocarcinomas identified a subset of genes whose expression could distinguish primary tumors from metastatic tumors, dubbed a "metastatic signature." Up-regulated genes in the signature include: SNRPF, HNRPAB, DHPS and securin. Down-regulation of actin, myosin and MHC class II was also associated with the signature. Additionally, the metastasis-associated expression of these genes was also observed in some primary tumors, indicating that cells with the potential to metastasize could be identified concurrently with diagnosis of the primary tumor. Recent work identified a form of genetic instability in cancer called chromosome instability (CIN) as a driver of metastasis. In aggressive cancer cells, loose DNA fragments from unstable chromosomes spill into the cytosol, leading to the chronic activation of innate immune pathways, which are hijacked by cancer cells to spread to distant organs.
Expression of this metastatic signature has been correlated with a poor prognosis and has been shown to be consistent in several types of cancer. Prognosis was shown to be worse for individuals whose primary tumors expressed the metastatic signature. Additionally, the expression of these metastasis-associated genes was shown to apply to other cancer types in addition to adenocarcinoma. Metastases of breast cancer, medulloblastoma and prostate cancer all had similar expression patterns of these metastasis-associated genes. The identification of this metastasis-associated signature provides promise for identifying cells with metastatic potential within the primary tumor and hope for improving the prognosis of these metastasis-associated cancers. Additionally, identifying the genes whose expression is changed in metastasis offers potential targets to inhibit metastasis. Management Treatment and survival are determined, to a great extent, by whether or not a cancer remains localized or spreads to other locations in the body. If the cancer metastasizes to other tissues or organs, it usually dramatically increases a patient's likelihood of death. Some cancers—such as some forms of leukemia, a cancer of the blood, or malignancies in the brain—can kill without spreading at all. Once a cancer has metastasized it may still be treated with radiosurgery, chemotherapy, radiation therapy, biological therapy, hormone therapy, surgery, or a combination of these interventions ("multimodal therapy"). The choice of treatment depends on many factors, including the type of primary cancer, the size and location of the metastases, the patient's age and general health, and the types of treatments used previously. In patients diagnosed with CUP it is often still possible to treat the disease even when the primary tumor cannot be located. Current treatments are rarely able to cure metastatic cancer, though some tumors, such as testicular cancer and thyroid cancer, are usually curable. Palliative care, care aimed at improving the quality of life of people with major illness, has been recommended as part of management programs for metastasis. Results from a systematic review of the literature on radiation therapy for brain metastases found that there is little evidence to inform comparative effectiveness and patient-centered outcomes regarding quality of life, functional status, and cognitive effects. Research Although metastasis is widely accepted to be the result of tumor cell migration, there is a hypothesis that some metastases are the result of inflammatory processes involving abnormal immune cells. The existence of metastatic cancers in the absence of primary tumors also suggests that metastasis is not always caused by malignant cells that leave primary tumors. Research by Sarna's team showed that heavily pigmented melanoma cells have a Young's modulus of about 4.93, whereas in non-pigmented ones it was only 0.98. In another experiment they found that the elasticity of melanoma cells is important for their metastasis and growth: non-pigmented tumors were bigger than pigmented ones and it was much easier for them to spread. They showed that there are both pigmented and non-pigmented cells in melanoma tumors, so that they can both be drug-resistant and metastatic.
His hypothesis was based on the clinical course of the patient. In March 2014 researchers discovered the oldest complete example of a human with metastatic cancer. The tumors had developed in a 3,000-year-old skeleton found in 2013 in a tomb in Sudan dating back to 1200 BC. The skeleton was analyzed using radiography and a scanning electron microscope. These findings were published in the Public Library of Science journal. Etymology Metastasis is a Greek word meaning "displacement", from μετά, meta, "next", and στάσις, stasis, "placement".
Biology and health sciences
Cancer
Health